Dataset schema:

| field | type |
|---|---|
| id | string |
| year | int64 |
| authors | string |
| title | string |
| language | string |
| num_authors | int64 |
| num_women_authors | int64 |
| woman_as_first_author | int64 |
| affiliations | string |
| num_affilitations | int64 |
| at_least_one_international_affiliation | int64 |
| international_affilitation_only | int64 |
| international_authors | int64 |
| names_international_authors | string |
| num_compay_authors | int64 |
| names_company_authors | string |
| countries_affilitations | string |
| cities_affiliations | string |
| abstract | string |
| introduction | string |
| conclusion | string |
| num_topics | int64 |
| topics | string |

id: 1_2014 | year: 2014 | authors: Azad Abad, Alessandro Moschitti | title: Creating a standard for evaluating Distant Supervision for Relation Extraction | language: ENG | num_authors: 2 | num_women_authors: 0 | woman_as_first_author: 0 | affiliations: Università di Trento, Qatar Computing Research Institute | num_affilitations: 2 | at_least_one_international_affiliation: 1 | international_affilitation_only: 0 | international_authors: 1 | names_international_authors: Alessandro Moschitti | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy, Qatar | cities_affiliations: Trento, Rome, Ar-Rayyan

abstract:
This paper defines a standard for comparing relation extraction (RE) systems based on Distant Supervision (DS). We integrate the well-known New York Times corpus with the more recent version of Freebase. Then, we define a simpler RE system based on DS, which exploits SVMs, tree kernels and a simple one-vs-all strategy. The resulting model can be used as a baseline for system comparison. We also study several example filtering techniques for improving the quality of the DS output.

introduction:
Currently, supervised learning approaches are widely used to train relation extractors. However, manually providing large-scale human-labeled training data is costly in terms of resources and time. Besides, (i) a small-size corpus can only contain a few relation types and (ii) the resulting trained model is domain-dependent. Distant Supervision (DS) is an alternative approach to overcome the problem of data annotation (Craven et al., 1999) as it can automatically generate training data by combining (i) a structured Knowledge Base (KB), e.g., Freebase, with a large-scale unlabeled corpus, C. The basic idea is: given a tuple r(e1, e2) contained in a reference KB, if both e1 and e2 appear in a sentence of C, that sentence is assumed to express the relation type r, i.e., it is considered a training sentence for r. For example, given the KB relation president(Obama, USA), the following sentence, "Obama has been elected in the USA presidential campaign", can be used as a positive training example for president(x, y). However, DS suffers from two major drawbacks: first, in early studies, Mintz et al. (2009) assumed that two entity mentions cannot be in a relation with different relation types r1 and r2. In contrast, Hoffmann et al. (2011) showed that 18.3% of the entities in Freebase that also occur in the New York Times 2007 corpus (NYT) overlap with more than one relation type. Second, although the DS method has shown some promising results, its accuracy suffers from noisy training data caused by two types of problems (Hoffmann et al., 2011; Intxaurrondo et al., 2013; Riedel et al., 2010): (i) a possible mismatch between the sentence semantics and the relation type mapped onto it, e.g., the correct KB relation located_in(Renzi, Rome) cannot be mapped onto the sentence "Renzi does not love the Rome soccer team"; and (ii) the coverage of the KB, e.g., a sentence can express relations that are not in the KB (this generates false negatives). Several approaches for selecting higher quality training sentences with DS have been studied, but comparing such methods is difficult due to the lack of well-defined benchmarks and models using DS. In this paper, we aim at building a standard to compare models based on DS: first of all, we considered the most used corpus in DS, i.e., the combination of NYT and Freebase (NYT-FB). Secondly, we mapped the Freebase entity IDs used in NYT-FB from the old version of 2007 to the newer Freebase 2014. Since entities changed, we asked an annotator to manually tag the entity mentions in the sentences. As a result, we created a new dataset usable as a stand-alone DS corpus, which we make available for research purposes. Finally, all the few RE models experimented with NYT-FB in the past are based on complex conditional random fields. This is necessary to encode the dependencies between the overlapping relations. Additionally, such models use very particular and sparse features, which make the replicability of the models and results complex, thus limiting the research progress in DS. Indeed, for comparing a new DS approach with the previous work using NYT-FB, the researcher is forced to re-implement a very complicated model and its sparse features. Therefore, we believe that simpler models can be very useful as (i) a much simpler reimplementation would enable model comparisons and (ii) it would be easier to verify if a DS method is better than another.
In this perspective, our proposed approach is based on convolution tree kernels, which can easily exploit syntactic/semantic structures. This is an important aspect to favor the replicability of our results. Moreover, our method differs from the previous state of the art on overlapping relations (Riedel et al., 2010) as we apply a modification of the simple one-vs-all strategy instead of complex graphical models. To make our approach competitive, we studied several parameters for optimizing SVMs and for filtering out noisy negative training examples. Our extensive experiments show that our models achieve satisfactory results.
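
To make the DS labeling heuristic described in this introduction concrete, here is a minimal sketch (an illustration only, not the authors' pipeline); entity matching by plain substring search and the toy KB/corpus are assumptions made for brevity.

```python
from collections import defaultdict

def distant_supervision_examples(kb_triples, sentences):
    """For each KB tuple r(e1, e2), collect every sentence mentioning both
    entities as a (noisy) positive training example for relation r."""
    examples = defaultdict(list)
    for relation, e1, e2 in kb_triples:
        for sentence in sentences:
            if e1 in sentence and e2 in sentence:
                examples[relation].append((e1, e2, sentence))
    return examples

kb = [("president", "Obama", "USA"), ("located_in", "Renzi", "Rome")]
corpus = [
    "Obama has been elected in the USA presidential campaign.",
    "Renzi does not love the Rome soccer team.",  # noisy match: wrong semantics
]
print(dict(distant_supervision_examples(kb, corpus)))
```

The second sentence illustrates the first type of noise discussed above: both entities match, but the sentence does not actually express located_in.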

conclusion:
We have proposed a standard framework, simple RE models and an upgraded version of NYT-FB for more easily measuring progress in DS research. Our RE model is based on SVMs, can manage overlapping relations, and exploits syntactic information and lexical features thanks to tree kernels. Additionally, we have shown that filtering techniques applied to DS data can discard noisy examples and significantly improve the RE accuracy.

num_topics: 7 | topics: Lexical and Semantic Resources and Analysis

id: 2_2014 | year: 2014 | authors: Paolo Annesi, Danilo Croce, Roberto Basili | title: Towards Compositional Tree Kernels | language: ENG | num_authors: 3 | num_women_authors: 0 | woman_as_first_author: 0 | affiliations: Università di Roma Tor Vergata | num_affilitations: 1 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Rome

abstract:
Several textual inference tasks rely on kernel-based learning. In particular, Tree Kernels (TKs) proved to be suitable for modeling syntactic and semantic similarity between linguistic instances. In order to generalize the meaning of linguistic phrases, Distributional Compositional Semantics (DCS) methods have been defined to compositionally combine the meaning of words in semantic spaces. However, TKs still do not account for compositionality. A novel kernel, i.e. the Compositional Tree Kernel, is presented, integrating DCS operators into the TK estimation. The evaluation over Question Classification and Metaphor Detection shows the contribution of semantic compositions w.r.t. traditional TKs.

introduction:
Tree Kernels (TKs) (Collins and Duffy, 2001) are consolidated similarity functions used in NLP for their ability to capture syntactic information directly from parse trees; they have been used to solve complex tasks such as Question Answering (Moschitti et al., 2007) or Semantic Textual Similarity (Croce et al., 2012). The similarity between parse tree structures is defined in terms of all possible syntagmatic substructures. Recently, the Smoothed Partial Tree Kernel (SPTK) has been defined in (Croce et al., 2011): the semantic information of the lexical nodes in a parse tree enables a smoothed similarity between structures that are partially similar and whose nodes can differ but are nevertheless related. Semantic similarity between words is evaluated in terms of vector similarity in a Distributional Semantic Space (Sahlgren, 2006; Turney and Pantel, 2010; Baroni and Lenci, 2010). Even if it achieves higher performance w.r.t. traditional TKs, the main limitations of the SPTK are that the discrimination between words is delegated only to the lexical nodes and that the semantic composition of words is not considered. We investigate a kernel function that exploits semantic compositionality to measure the similarity between syntactic structures. In our perspective, the semantic information should be emphasized by compositionally propagating lexical information over an entire parse tree, making explicit the head/modifier relationships between words. This enables the application, within the TK computation, of Distributional Compositional Semantics (DCS) metrics, which combine lexical representations by vector operators in the distributional space (Mitchell and Lapata, 2008; Erk and Pado, 2008; Zanzotto et al., 2010; Baroni and Lenci, 2010; Grefenstette and Sadrzadeh, 2011; Blacoe and Lapata, 2012; Annesi et al., 2012). The idea is to i) define a procedure to mark the nodes of a parse tree that allows lexical bigrams to be spread across the tree nodes, ii) apply DCS smoothing metrics between such compositional nodes, and iii) enrich the SPTK formulation with compositional distributional semantics. The resulting model has been called Compositional Smoothed Partial Tree Kernel (CSPTK). The entire process of marking parse trees is described in Section 2. Then, in Section 3, the CSPTK is presented. Finally, in Section 4, the evaluations over the Question Classification and Metaphor Detection tasks are shown.
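
As a concrete reference for the DCS operators mentioned above, the sketch below applies the additive and pointwise-multiplicative composition functions of Mitchell and Lapata (2008) to toy head/modifier vectors; the vectors are invented, and the actual CSPTK embeds such compositions inside the tree-kernel computation rather than using them in isolation.

```python
import numpy as np

def additive(u, v, alpha=0.5, beta=0.5):
    """Additive composition: weighted sum of head and modifier vectors."""
    return alpha * u + beta * v

def multiplicative(u, v):
    """Pointwise-multiplicative composition."""
    return u * v

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy distributional vectors for two head/modifier pairs.
black, cat = np.array([0.2, 0.9, 0.1]), np.array([0.8, 0.3, 0.5])
dark, animal = np.array([0.3, 0.8, 0.2]), np.array([0.7, 0.2, 0.6])

print(cosine(additive(black, cat), additive(dark, animal)))
print(cosine(multiplicative(black, cat), multiplicative(dark, animal)))
```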

conclusion:
In this paper, a novel kernel function has been proposed in order to exploit Distributional Compositional operators within Tree Kernels. The proposed approach propagates lexical semantic information over an entire tree by building a Compositionally labeled Tree. The resulting Compositional Smoothed Partial Tree Kernel measures the semantic similarity between complex linguistic structures by applying metrics sensitive to distributional compositional semantics. Empirical results in the Question Classification and Metaphor Detection tasks demonstrate the positive contribution of compositional information to the generalization capability of the proposed kernel.

num_topics: 22 | topics: Distributional Semantics

id: 3_2014 | year: 2014 | authors: Zhenisbek Assylbekov, Assulan Nurkas | title: Initial Explorations in Kazakh to English Statistical Machine Translation | language: ENG | num_authors: 2 | num_women_authors: 0 | woman_as_first_author: 0 | affiliations: Nazarbayev University | num_affilitations: 1 | at_least_one_international_affiliation: 1 | international_affilitation_only: 1 | international_authors: 2 | names_international_authors: Zhenisbek Assylbekov, Assulan Nurkas | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Kazakhstan | cities_affiliations: Astana

abstract:
This paper presents preliminary results of developing a statistical machine translation system from Kazakh to English. Starting with a baseline model trained on 1.3K and then on 20K aligned sentences, we tried to cope with the complex morphology of Kazakh by applying different schemes of morphological word segmentation to the training and test data. Morphological segmentation appears to benefit our system: our best segmentation scheme achieved a 28% reduction of the out-of-vocabulary rate and a 2.7-point BLEU improvement over the baseline.

introduction:
The availability of considerable amounts of parallel texts in Kazakh and English has motivated us to apply the statistical machine translation (SMT) paradigm to building a Kazakh-to-English machine translation system using publicly available data and open-source tools. The main ideas of SMT were introduced by researchers at IBM’s Thomas J. Watson Research Center (Brown et al., 1993). This paradigm implies that translations are generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. We show how one can compile a Kazakh-English parallel corpus from publicly available resources in Section 2. It is well known that challenges arise in statistical machine translation when we deal with languages with complex morphology, e.g. Kazakh. However, there have recently been attempts to tackle such challenges for similar languages by morphological pre-processing of the source text (Bisazza and Federico, 2009; Habash and Sadat, 2006; Mermer, 2010). We apply morphological preprocessing techniques to the Kazakh side of our corpus and show how they improve translation performance in Sections 5 and 6.
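
A minimal sketch of how the effect of a segmentation scheme on the out-of-vocabulary (OOV) rate can be measured, as in the evaluation summarized above; the segment() function is a toy stand-in for a real Kazakh morphological segmenter, and the example tokens and suffix list are assumptions made for illustration.

```python
def oov_rate(train_tokens, test_tokens):
    """Fraction of test tokens that never occur in the training vocabulary."""
    vocab = set(train_tokens)
    return sum(1 for tok in test_tokens if tok not in vocab) / len(test_tokens)

# Toy stand-in for a morphological segmenter: a real scheme would use a
# Kazakh morphological transducer, e.g. "kitaptardan" -> "kitap +tar +dan".
TOY_SUFFIXES = ("tardan", "tar", "dan")

def segment(token):
    for suffix in TOY_SUFFIXES:
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return [token[: -len(suffix)], "+" + suffix]
    return [token]

def segmented(tokens):
    return [piece for tok in tokens for piece in segment(tok)]

train = "bul kitap qyzyq eken".split()
test = "bul kitaptar qyzyq".split()
print("word-level OOV:", oov_rate(train, test))            # 'kitaptar' unseen
print("segmented OOV: ", oov_rate(segmented(train), segmented(test)))
```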

conclusion:
The experiments have shown that a selective morphological segmentation improves the performance of an SMT system. One can see that, in contrast to Bisazza and Federico’s results (2009), in our case MS11 downgrades the translation performance. One of the reasons for this might be that Bisazza and Federico considered translation of spoken language, in which sentences were shorter on average than in our corpora. In this work we mainly focused on nominal suffixation. In our future work we are planning to: increase the dictionary of the morphological transducer, which currently covers 93.3% of our larger corpus; improve morphological disambiguation using e.g. the perceptron algorithm (Sak et al., 2007); develop more segmentation rules for verbs and other parts of speech; and mine more mono- and bilingual data using official websites of Kazakhstan’s public authorities.

num_topics: 10 | topics: Machine Translation

id: 4_2014 | year: 2014 | authors: Giuseppe Attardi, Vittoria Cozza, Daniele Sartiano | title: Adapting Linguistic Tools for the Analysis of Italian Medical Records | language: ENG | num_authors: 3 | num_women_authors: 1 | woman_as_first_author: 0 | affiliations: Università di Pisa | num_affilitations: 1 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Pisa

abstract:
We address the problem of recognition of medical entities in clinical records written in Italian. We report on experiments performed on medical data in English provided in the shared tasks at CLEF-ER 2013 and SemEval 2014. This allowed us to refine Named Entity recognition techniques to deal with the specifics of medical and clinical language in particular. We present two approaches for transferring the techniques to Italian. One solution relies on the creation of an Italian corpus of annotated clinical records and the other on adapting existing linguistic tools to the medical domain.

introduction:
One of the objectives of the RIS project (RIS 2014) is to develop tools and techniques to help identify patients at risk of their disease evolving into a chronic condition. The study relies on a sample of patient data consisting of both medical test reports and clinical records. We are interested in verifying whether text analytics, i.e. information extracted from natural language texts, can supplement or improve information extracted from the more structured data available in the medical test records. Clinical records are expressed as plain text in natural language and contain mentions of diseases or symptoms affecting a patient, whose accurate identification is crucial for any further text mining process. Our task in the project is to provide a set of NLP tools for automatically extracting information from medical reports in Italian. We are facing the double challenge of adapting NLP tools to the medical domain and of handling documents in a language (Italian) for which there are few available linguistic resources. Our approach to information extraction exploits both supervised machine-learning tools, which require annotated training corpora, and unsupervised deep learning techniques, in order to leverage unlabeled data. To deal with the lack of annotated Italian resources for the bio-medical domain, we attempted to create a silver corpus with a semi-automatic approach that uses both machine translation and dictionary-based techniques. The corpus will be validated through crowdsourcing.

conclusion:
We presented a series of experiments on biomedical texts from both medical literature and clinical records, in multiple languages, that helped us refine the techniques of NE recognition and adapt them to Italian. We explored supervised techniques as well as unsupervised ones, in the form of word embeddings or word clusters. We also developed a Deep Learning NE tagger that exploits embeddings. The best results were achieved by a MEMM sequence labeler that uses clusters as features, further improved in an ensemble combination with other NE taggers. As a further contribution of our work, we produced, by exploiting semi-automated techniques, an Italian corpus of medical records annotated with mentions of medical terms.

num_topics: 20 | topics: In-domain IR and IE

id: 5_2014 | year: 2014 | authors: Alessia Barbagli, Pietro Lucisano, Felice Dell'Orletta, Simonetta Montemagni | title: Tecnologie del linguaggio e monitoraggio dell'evoluzione delle abilità di scrittura nella scuola secondaria di primo grado | language: ITA | num_authors: 5 | num_women_authors: 2 | woman_as_first_author: 1 | affiliations: Sapienza Università di Roma, CNR-ILC | num_affilitations: 2 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Rome, Pisa

abstract:
The last decade has seen increased international use of language technologies for the study of learning processes. This contribution, which is part of a wider research effort in experimental pedagogy, reports the first and promising results of a study aimed at monitoring the evolution of the Italian language learning process, based on the written production of students and carried out with tools for automatic linguistic annotation and knowledge extraction.

introduction:
The use of language technologies for the study of learning processes and, in more applied terms, the construction of so-called Intelligent Computer-Assisted Language Learning (ICALL) systems is more and more at the center of interdisciplinary research that aims to show how methods and tools for automatic linguistic annotation and knowledge extraction are now mature enough to be used also in the educational and school context. At the international level, this is demonstrated by the success of the Workshop on Innovative Use of NLP for Building Educational Applications (BEA). This contribution is placed in this research context and reports the first results of a still ongoing study aimed at describing, by quantitative and qualitative means, the evolution of writing skills, both at the level of text content and of linguistic skills, from the first to the second year of lower secondary school. It is an exploratory work aimed at building an empirical analysis model that allows the observation of the processes and products of the teaching of written production. The innovative character of this research in the national and international landscape lies at various levels. The research described here represents the first study aimed at monitoring the evolution of the Italian language learning process based on the written production of students and carried out with tools for automatic linguistic annotation and knowledge extraction. The use of language technologies for monitoring the evolution of learners' language skills has its roots in a line of studies launched internationally in the last decade, within which linguistic analyses produced by automatic language processing tools are used, for example, to: monitor the development of syntax in child language (Sagae et al., 2005; Lu, 2007); identify cognitive deficits through measures of syntactic complexity (Roark et al., 2007) or of semantic association (Rouhizadeh et al., 2013); and monitor reading ability as a central component of language competence (Schwarm and Ostendorf, 2005; Petersen and Ostendorf, 2009). Starting from this line of research, Dell’Orletta and Montemagni (2012) and Dell’Orletta et al. (2011) showed in two feasibility studies that computational-linguistic technologies can play a central role in evaluating the Italian language skills of students at school and in tracing their evolution over time. This contribution represents an original and innovative development of this research line, as the proposed linguistic monitoring methodology is used within a wider study of experimental pedagogy, based on a significant corpus of written productions of students and aimed at tracing the evolution of skills in a diachronic and/or socio-cultural perspective. The subject of the analyses represents another element of novelty: the first two years of lower secondary school were chosen as the school level to analyze because it has been little investigated by empirical research and because few studies so far have verified the actual teaching practice derived from the indications provided by the ministerial programs relating to this school cycle, from those of 1979 to the National Indications of 2012.

conclusion:
The comparative monitoring of the linguistic characteristics traced in the corpus of common tests carried out in the first and second year aimed at tracing the evolution of the students' language skills across the two years. The ANOVA on the common tests shows that there are significant differences between the first and second year at all levels of linguistic analysis considered. For example, with regard to the 'base' features, the variation of the average number of tokens per sentence between the tests of the two years turns out to be significant: while the tests written in the first year contain sentences of 23.82 tokens on average, the average sentence length in the second-year tests is 20.71 tokens. Also significant is the variation in the use of entries belonging to the Basic Italian Vocabulary (VdB), which decreases from 83% of the vocabulary in the first-year tests to 79% in the second year, as well as the TTR values (computed on the first 100 tokens), which increase from 0.66 to 0.69. In both cases, such changes can be seen as the result of a lexical enrichment. At the morphosyntactic level, the features that capture the use of verbal tenses and moods are significant. At the level of syntactic monitoring, it is for example the use of the direct object in pre- or post-verbal position that varies significantly: if in the first-year tests 19% of the object complements are in pre-verbal position, in the second year the percentage decreases to 13%, while the post-verbal complements rise from 81% in the first year to 87% in the second year. In the second-year tests, therefore, a greater respect for the canonical subject-verb-object order is observed, closer to the norms of written than of spoken language. Although the results are still preliminary with respect to the wider research context, we believe they clearly show the potential of the encounter between computational and educational linguistics, opening new research prospects. Current activities include the analysis of the correlation between the evidence acquired through linguistic monitoring and the process and background variables, as well as the study of the evolution of the language skills of individual students.
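
The kind of per-feature significance test reported above (here, average sentence length in tokens between first- and second-year essays) can be sketched as follows; the numbers are invented toy values, and scipy's one-way ANOVA stands in for whatever statistical package the study actually used.

```python
from scipy.stats import f_oneway

# Toy per-essay values of one monitored feature (tokens per sentence);
# invented for illustration, not the study's data.
year1 = [25.1, 22.4, 24.8, 23.0, 26.2, 21.9, 24.3, 23.8]
year2 = [20.3, 19.8, 22.1, 20.9, 21.4, 19.5, 20.6, 21.0]

f_stat, p_value = f_oneway(year1, year2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```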

num_topics: 8 | topics: Learner Corpora and Language Acquisition

id: 6_2014 | year: 2014 | authors: Francesco Barbieri, Francesco Ronzano, Horacio Saggion | title: Italian Irony Detection in Twitter: a First Approach | language: ENG | num_authors: 3 | num_women_authors: 0 | woman_as_first_author: 0 | affiliations: Universitat Pompeu Fabra | num_affilitations: 1 | at_least_one_international_affiliation: 1 | international_affilitation_only: 1 | international_authors: 3 | names_international_authors: Francesco Barbieri, Francesco Ronzano, Horacio Saggion | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Spain | cities_affiliations: Barcelona

abstract:
Irony is a linguistic device used to say something but meaning something else. The distinctive trait of ironic utterances is the opposition of literal and intended meaning. This characteristic makes the automatic recognition of irony a challenging task for current systems. In this paper we present and evaluate the first automated system targeted to detect irony in Italian Tweets, introducing and exploiting a set of linguistic features useful for this task.

introduction:
Sentiment Analysis is the interpretation of attitudes and opinions of subjects on certain topics. With the growth of social networks, Sentiment Analysis has become fundamental for customer reviews, opinion mining, and natural language user interfaces (Yasavur et al., 2014). During the last decade the number of investigations dealing with sentiment analysis has considerably increased, targeting most of the time the English language. Comparatively, and to the best of our knowledge, there are only a few works for the Italian language. In this paper we explore an important sentiment analysis problem: irony detection. Irony is a linguistic device used to say something when meaning something else (Quintilien and Butler, 1953). Dealing with figurative language is one of the biggest challenges in correctly determining the polarity of a text: analysing phrases where literal and intended meaning are not the same is hard for humans, hence even harder for machines. Moreover, systems able to detect irony can also benefit other AI areas like Human Computer Interaction. Approaches to detect irony have already been proposed for English, Portuguese and Dutch texts (see Section 2). Some of these systems used words, or word patterns, as irony detection features (Davidov et al., 2010; González-Ibáñez et al., 2011; Reyes et al., 2013; Buschmeier et al., 2014). Other approaches, like Barbieri and Saggion (2014a), exploited lexical and semantic features of single words, such as their frequency in reference corpora or the number of associated synsets. Relying on the latter method, in this paper we present the first system for automatic detection of irony in Italian Tweets. In particular, we investigate the effectiveness of Decision Trees in classifying Tweets as ironic or not ironic, showing that classification performance increases by considering lexical and semantic features of single words instead of pure bag-of-words (BOW) approaches. To train our system, we exploited as ironic examples the Tweets from the account of a famous collective blog named Spinoza, and as non-ironic examples the Tweets retrieved from the timelines of seven popular Italian newspapers.
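
A minimal scikit-learn sketch of the experimental contrast described above: a Decision Tree trained on bag-of-words counts versus one trained on aggregate word-level features. The tweets, labels and the two features (tweet length and the share of words found in a small reference list) are invented stand-ins for the paper's data and for its lexical/semantic features.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

tweets = ["che bella giornata di pioggia", "il governo approva la legge",
          "adoro lavorare il sabato sera", "nuovo record di disoccupazione"]
labels = [1, 0, 1, 0]  # 1 = ironic, 0 = not ironic (toy annotations)

# Baseline: bag-of-words features.
bow = CountVectorizer().fit_transform(tweets)
bow_clf = DecisionTreeClassifier(random_state=0).fit(bow, labels)

# Word-level features aggregated per tweet (illustrative stand-ins).
common = {"il", "la", "di", "che", "nuovo", "governo"}

def word_features(text):
    words = text.split()
    return [len(words), sum(w in common for w in words) / len(words)]

X = np.array([word_features(t) for t in tweets])
feat_clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(bow_clf.predict(bow[:1]), feat_clf.predict(X[:1]))
```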

conclusion:
In this study we evaluated a novel system to detect irony in Italian, focusing on Tweets. We tackled this problem as binary classification, where the ironic examples are posts of the Twitter account Spinoza and the non-ironic examples are Tweets from seven popular Italian newspapers. We evaluated the effectiveness of Decision Trees with different feature sets to carry out this classification task. Our system focuses only on the lexical and semantic information that characterises each word, rather than on the words themselves as features. The performance of the system is good if compared to our baseline (BOW), which considers only word occurrences as features, since we obtain an F1 improvement of 0.11. This result shows the suitability of our approach to detect ironic Italian Tweets. However, there is room to enrich and tune the model, as this is only a first approach. It is possible both to improve the model with new features (for example related to punctuation or language models) and to evaluate the system on new and extended corpora of Italian Tweets as they become available. Another issue we faced is the lack of accurate evaluations of feature performance across distinct classifiers/algorithms for irony detection.

num_topics: 6 | topics: Sentiment, Emotion, Irony, Hate

id: 7_2014 | year: 2014 | authors: Gianni Barlacchi, Massimo Nicosia, Alessandro Moschitti | title: A Retrieval Model for Automatic Resolution of Crossword Puzzles in Italian Language | language: ENG | num_authors: 3 | num_women_authors: 0 | woman_as_first_author: 0 | affiliations: Università di Trento, Qatar Computing Research Institute | num_affilitations: 2 | at_least_one_international_affiliation: 1 | international_affilitation_only: 0 | international_authors: 2 | names_international_authors: Massimo Nicosia, Alessandro Moschitti | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy, Qatar | cities_affiliations: Trento, Rome, Ar-Rayyan

abstract:
In this paper we study methods for improving the quality of automatic extraction of answer candidates for an extremely challenging task: the automatic resolution of crossword puzzles for the Italian language. Many automatic crossword puzzle solvers are based on database systems accessing previously resolved crossword puzzles. Our approach consists in querying the database (DB) with a search engine and converting its output into a probability score, which combines in a single scoring model, i.e., a logistic regression model, both the search engine score and statistical similarity features. This improved retrieval model greatly impacts the resolution accuracy of crossword puzzles.

introduction:
Crossword Puzzles (CPs) are probably one of the most popular language games. Automatic CP solvers have been mainly targeted by the artificial intelligence (AI) community, which has mostly focused on AI techniques for filling the puzzle grid, given a set of answer candidates for each clue. The basic idea is to optimize the overall probability of correctly filling the entire grid by exploiting the likelihood of each candidate answer, fulfilling at the same time the grid constraints. After several failures in approaching human expert performance, it has become clear that designing more accurate solvers would not have provided a winning system. In contrast, the Precision and Recall of the answer candidates are obviously a key factor: very high values for these performance measures would enable the solver to quickly find the correct solution. Similarly to the Jeopardy! challenge case (Ferrucci et al., 2010), the solution relies on Question Answering (QA) research. However, although some CP clues are rather similar to standard questions, there are some specific differences: (i) clues can be in interrogative form or not, e.g., Capitale d’Italia: Roma; (ii) they can contain riddles or be deliberately ambiguous and misleading (e.g., Se fugge sono guai: gas); (iii) the exact length of the answer keyword is known in advance; and (iv) the confidence in the answers is an extremely important input for the CP solver. There have been many attempts to build automatic CP solving systems. Their goal is to outperform human players in solving crosswords more accurately and in less time. Proverb (Littman et al., 2002) was the first system for the automatic resolution of CPs. It includes several modules for generating lists of candidate answers. These lists are merged and used to solve a Probabilistic-Constraint Satisfaction Problem. Proverb relies on a very large crossword database as well as several expert modules, each of them mainly based on domain-specific databases (e.g., movies, writers and geography). WebCrow (Ernandes et al., 2005) is based on Proverb. In addition to what its predecessor does, WebCrow carries out basic linguistic analysis such as Part-Of-Speech tagging and lemmatization. It takes advantage of semantic relations contained in WordNet, dictionaries and gazetteers. Its Web module is constituted by a search engine, which can retrieve text snippets or documents related to the clue. WebCrow uses a WA* algorithm (Pohl, 1970) for Probabilistic-Constraint Satisfaction Problems, adapted for CP resolution. To the best of our knowledge, the state-of-the-art system for automatic CP solving is Dr. Fill (Ginsberg, 2011). It targets the crossword filling task with a Weighted-Constraint Satisfaction Problem. Constraint violations are weighted and can be tolerated. It heavily relies on huge databases of clues. All of these systems query the DB of previously solved CP clues using standard techniques, e.g., SQL Full-Text queries. The DB is a very rich and important knowledge base. In order to improve the quality of the automatic extraction of answer candidate lists from the DB, we provide for the Italian language a completely novel solution, by substituting the DB and the SQL function with a search engine for retrieving clues similar to the target one. In particular, we define a reranking function for the retrieved clues based on a logistic regression model (LRM), which combines the search engine score with other similarity features.
To carry out our study, we created a clue similarity dataset for the Italian language. This dataset constitutes an interesting resource that we made available to the research community.
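
A sketch of the reranking step described above: the retrieval score of each candidate clue is combined with additional similarity features and mapped to a probability by a logistic regression model. The feature values and labels below are toy numbers, and scikit-learn's LogisticRegression stands in for whatever LRM implementation was actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [search-engine score, word overlap with the target clue,
#            same-answer-length flag]; toy values for retrieved clues.
X_train = np.array([[12.3, 0.8, 1], [4.1, 0.2, 0], [9.7, 0.6, 1], [2.0, 0.1, 0]])
y_train = np.array([1, 0, 1, 0])  # 1 = retrieved clue had the correct answer

lrm = LogisticRegression().fit(X_train, y_train)

# Probability scores used to rerank a new candidate list.
candidates = np.array([[10.5, 0.7, 1], [3.3, 0.3, 0]])
print(lrm.predict_proba(candidates)[:, 1])
```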

conclusion:
In this paper we improve the answer extraction from the DB for automatic CP resolution. We combined the state-of-the-art BM25 retrieval model and an LRM by converting the BM25 score into a probability score for each answer candidate. For our study, and to test our methods, we created a corpus for clue similarity containing clues in Italian. We improve on the lists generated by WebCrow by 8.5 absolute percentage points in MRR. However, the end-to-end CP resolution test does not show a large improvement, as the percentage of retrieved clues is not high enough.

num_topics: 1 | topics: Language Models

id: 8_2014 | year: 2014 | authors: Pierpaolo Basile, Annalina Caputo, Giovanni Semeraro | title: Analysing Word Meaning over Time by Exploiting Temporal Random Indexing | language: ENG | num_authors: 3 | num_women_authors: 1 | woman_as_first_author: 0 | affiliations: Università di Bari Aldo Moro | num_affilitations: 1 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Bari

abstract:
This paper proposes an approach to the construction of WordSpaces which takes into account temporal information. The proposed method is able to build a geometrical space considering several periods of time. This methodology enables the analysis of the time evolution of the meaning of a word. Exploiting this approach, we build a framework, called Temporal Random Indexing (TRI), that provides all the necessary tools for building WordSpaces and performing such linguistic analysis. We propose some examples of usage of our tool by analysing word meanings in two corpora: a collection of Italian books and English scientific papers about computational linguistics.

introduction:
The analysis of word-usage statistics over huge corpora has become a common technique in many corpus linguistics tasks, which benefit from the growth rate of available digital text and computational power. Better known as Distributional Semantic Models (DSM), such methods are an easy way of building geometrical spaces of concepts, also known as Semantic (or Word) Spaces, by skimming through huge corpora of text in order to learn the context of usage of words. In the resulting space, semantic relatedness/similarity between two words is expressed by the closeness between word-points. Thus, the semantic similarity can be computed as the cosine of the angle between the two vectors that represent the words. DSM can be built using different techniques. One common approach is Latent Semantic Analysis (Landauer and Dumais, 1997), which is based on the Singular Value Decomposition of the word co-occurrence matrix. However, many other methods that try to take into account word order (Jones and Mewhort, 2007) or predications (Cohen et al., 2010) have been proposed. The Recursive Neural Network (RNN) methodology (Mikolov et al., 2010) and its variant proposed in the word2vec framework (Mikolov et al., 2013), based on the continuous bag-of-words and skip-gram models, take a completely new perspective. However, most of these techniques build such Semantic Spaces by taking a snapshot of the word co-occurrences over the linguistic corpus. This makes the study of semantic changes during different periods of time difficult to deal with. In this paper we show how one of these DSM techniques, called Random Indexing (RI) (Sahlgren, 2005; Sahlgren, 2006), can be easily extended to allow the analysis of semantic changes of words over time. The ultimate aim is to provide a tool which enables one to understand how words change their meanings within a document corpus as a function of time. We choose RI for two main reasons: 1) the method is incremental and requires few computational resources while still retaining good performance; 2) the methodology for building the space can be easily expanded to integrate temporal information. Indeed, the disadvantage of classical DSM approaches is that WordSpaces built on different corpora are not comparable: it is always possible to compare similarities in terms of neighbourhood words or to combine vectors by geometrical operators, such as the tensor product, but these techniques do not allow a direct comparison of vectors belonging to two different spaces. Our approach based on RI is able to build a WordSpace on different time periods and makes all these spaces comparable to each other, actually enabling the analysis of word meaning changes over time by simple vector operations in WordSpaces. The paper is structured as follows: Section 2 provides details about the adopted methodology and the implementation of our framework. Some examples of the potential of our framework are reported in Section 3. Lastly, Section 4 closes the paper.
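
A compact sketch of the Random Indexing idea extended over time, as described above: because the sparse random index vectors are created once and shared by every period, the WordSpaces built on different time slices live in the same geometric space and can be compared directly. The corpus, dimensionality and seed count below are toy assumptions, not the TRI framework itself.

```python
import numpy as np
from collections import defaultdict

DIM, SEEDS = 300, 10
rng = np.random.default_rng(0)
index_vectors = {}  # shared across all time periods -> spaces stay comparable

def index_vector(word):
    """Sparse ternary random vector, created once and reused for every period."""
    if word not in index_vectors:
        v = np.zeros(DIM)
        pos = rng.choice(DIM, SEEDS, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], SEEDS)
        index_vectors[word] = v
    return index_vectors[word]

def build_space(sentences, window=2):
    """Accumulate the index vectors of context words for each target word."""
    space = defaultdict(lambda: np.zeros(DIM))
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    space[w] += index_vector(sent[j])
    return space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

space_t1 = build_space([["apple", "fruit", "tree"], ["apple", "pie", "sweet"]])
space_t2 = build_space([["apple", "computer", "company"], ["apple", "phone", "device"]])
# The same word compared across the two time periods:
print(cosine(space_t1["apple"], space_t2["apple"]))
```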

conclusion:
We propose a method for building WordSpaces taking into account information about time. In a WordSpace, words are represented as mathematical points and the similarity is computed according to their closeness. The proposed framework, called TRI, is able to build several WordSpaces in different time periods and to compare vectors across the spaces to understand how the meaning of a word has changed over time. We reported some examples of our framework, which show the potential of our system in capturing word usage changes over time.

num_topics: 22 | topics: Distributional Semantics

id: 9_2014 | year: 2014 | authors: Pierpaolo Basile, Annalina Caputo, Giovanni Semeraro | title: Combining Distributional Semantic Models and Sense Distribution for Effective Italian Word Sense Disambiguation | language: ENG | num_authors: 3 | num_women_authors: 1 | woman_as_first_author: 0 | affiliations: Università di Bari Aldo Moro | num_affilitations: 1 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Bari

abstract:
Distributional semantics approaches have proven their ability to enhance the performance of overlap-based Word Sense Disambiguation algorithms. This paper shows the application of such a technique to the Italian language, by analysing the usage of two different Distributional Semantic Models built upon ItWaC and Wikipedia corpora, in conjunction with two different functions for leveraging the sense distributions. Results of the experimental evaluation show that the proposed method outperforms both the most frequent sense baseline and other state-of-the-art systems.

introduction:
Given two words to disambiguate, the Lesk (1986) algorithm selects those senses which maximise the overlap between their definitions (i.e. glosses), resulting in a pairwise comparison between all the involved glosses. Since its original formulation, several variations of this algorithm have been proposed in an attempt to reduce its complexity, like the simplified Lesk (Kilgarriff and Rosenzweig, 2000; Vasilescu et al., 2004), or to maximize the chance of overlap, like in the adapted version (Banerjee and Pedersen, 2002). One of the limitations of the Lesk approach lies in the exact match between words in the sense definitions. Semantic similarity, rather than word overlap, has been proposed as a method to overcome such a limitation. Earlier approaches were based on the notion of semantic relatedness (Patwardhan et al., 2003) and tried to exploit the relationships between synsets in the WordNet graph. More recently, Distributional Semantic Models (DSM) have stood out as a way of computing such semantic similarity. DSM allow the representation of concepts in a geometrical space through word vectors. This kind of representation captures the semantic relatedness that occurs between words in paradigmatic relations, and enables the computation of semantic similarity between whole sentences. Broadening the definition of semantic relatedness, Patwardhan and Pedersen (2006) took into account WordNet contexts: a gloss vector is built for each word sense using its definition and those of related synsets in WordNet. A distributional thesaurus is used for the expansion of both glosses and the context in Miller et al. (2012), where the overlap is computed as in the original Lesk algorithm. More recently, Basile et al. (2014) proposed a variation of the Lesk algorithm based on both the simplified and the adapted version. This method combines the enhanced overlap, given by the definitions of related synsets, with the reduced number of matchings, which are limited to the contextual words in the simplified version. The evaluation was conducted on the SemEval-2013 Multilingual Word Sense Disambiguation task (Navigli et al., 2013), and involved the use of BabelNet (Navigli and Ponzetto, 2012) as sense inventory. While performance for the English task was above that of the other task participants, the same behaviour was not reported for the Italian language. This paper proposes a deeper investigation of the algorithm described in Basile et al. (2014) for the Italian language. We analyse the effect on the disambiguation performance of the use of two different corpora for building the distributional space. Moreover, we introduce a new sense distribution function (SDfreq), based on synset frequency, and compare its capability in boosting the distributional Lesk algorithm with respect to the one proposed in Basile et al. (2014). The rest of the paper is structured as follows: Section 2 provides details about the Distributional Lesk algorithm and DSM, and defines the two above mentioned sense distribution functions exploited in this work. The evaluation, along with details about the two corpora and how the DSM are built, is presented in Section 3, which is followed by some conclusions about the presented results.
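
A toy sketch of the distributional Lesk idea investigated here: the gloss of each candidate sense and the disambiguation context are turned into vectors by summing word embeddings, their cosine similarity is weighted by a sense-distribution prior, and the best-scoring sense wins. The tiny embedding space, glosses and priors below are invented for illustration, not the paper's resources.

```python
import numpy as np

def text_vector(words, embeddings, dim=3):
    """Sum of the word vectors of a gloss or a context (unknown words skipped)."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    n = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / n) if n else 0.0

def distributional_lesk(context, senses, embeddings, sense_prob):
    """Score each sense by gloss/context similarity weighted by its prior."""
    ctx_vec = text_vector(context, embeddings)
    scores = {s: cosine(ctx_vec, text_vector(gloss, embeddings)) * sense_prob[s]
              for s, gloss in senses.items()}
    return max(scores, key=scores.get), scores

# Toy distributional space and sense inventory for Italian "calcio".
emb = {"partita": np.array([0.9, 0.1, 0.0]), "sport": np.array([0.8, 0.2, 0.1]),
       "squadra": np.array([0.85, 0.15, 0.05]), "minerale": np.array([0.0, 0.2, 0.9]),
       "osso": np.array([0.1, 0.1, 0.8])}
senses = {"calcio_sport": ["sport", "squadra"], "calcio_elemento": ["minerale", "osso"]}
priors = {"calcio_sport": 0.7, "calcio_elemento": 0.3}
print(distributional_lesk(["partita", "squadra"], senses, emb, priors)[0])
```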

conclusion:
This paper proposed an analysis, for the Italian language, of an enhanced version of the Lesk algorithm which replaces word overlap with distributional similarity. We analysed two DSM built over the ItWaC and Wikipedia corpora, along with two sense distribution functions (SDprob and SDfreq). The sense distribution functions were computed over MultiSemCor, in order to avoid missing references between Italian and English synsets. The combination of the ItWaC-based DSM with the SDprob function resulted in the best overall result for the Italian portion of the SemEval-2013 Task 12 dataset.

num_topics: 7 | topics: Lexical and Semantic Resources and Analysis

id: 10_2014 | year: 2014 | authors: Valerio Basile | title: A Lesk-inspired Unsupervised Algorithm for Lexical Choice from WordNet Synsets | language: ENG | num_authors: 1 | num_women_authors: 0 | woman_as_first_author: 0 | affiliations: University of Groningen | num_affilitations: 1 | at_least_one_international_affiliation: 1 | international_affilitation_only: 1 | international_authors: 1 | names_international_authors: Valerio Basile | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Netherlands | cities_affiliations: Groningen

abstract:
The generation of text from abstract meaning representations involves, among other tasks, the production of lexical items for the concepts to realize. Using WordNet as a foundational ontology, we exploit its internal network structure to predict the best lemmas for a given synset without the need for annotated data. Experiments based on re-generation and automatic evaluation show that our novel algorithm is more effective than a straightforward frequency-based approach.

introduction:
Many linguists argue that true synonyms don’t exist (Bloomfield, 1933; Bolinger, 1968). Yet, words with similar meanings do exist and they play an important role in language technology, where lexical resources such as WordNet (Fellbaum, 1998) employ synsets, sets of synonyms that cluster words with the same or similar meaning. It would be wrong to think that any member of a synset would be an equally good candidate for every application. Consider for instance the synset {food, nutrient}, a concept whose gloss in WordNet is “any substance that can be metabolized by an animal to give energy and build tissue”. In (1), this needs to be realized as “food”, but in (2) as “nutrient”. 1. It said the loss was significant in a region where fishing provides a vital source of food/nutrient. 2. The Kind-hearted Physician administered a stimulant, a tonic, and a food/nutrient, and went away. A straightforward solution based on n-gram models or grammatical constraints (“a food” is ungrammatical in the example above) is not always applicable, since it would be necessary to generate the complete sentence first to exploit such features. This problem of lexical choice is what we want to solve in this paper. In a way it can be regarded as the reverse of WordNet-based Word Sense Disambiguation: instead of determining the right synset for a certain word in a given context, the problem is to decide which word of a synset is the best choice in a given context. Lexical choice is a key task in the larger framework of Natural Language Generation, where an ideal model has to produce varied, natural-sounding utterances. In particular, generation from purely semantic structures, carrying little to no syntactic or lexical information, needs solutions that do not depend on pre-made choices of words to express generic concepts. The input to a lexical choice component in this context is some abstract representation of meaning that may specify to different extents the linguistic features that the expected output should have. WordNet synsets are good candidate representations of word meanings, as WordNet could be seen as a dictionary, where each synset has its own definition in written English. WordNet synsets are also well suited for lexical choice, because they consist of actual sets of lemmas, considered to be synonyms of each other in specific contexts. Thus, the problem presented here is restricted to the choice of lemmas from WordNet synsets. Despite its importance, the lexical choice problem is not broadly considered by the NLG community, one of the reasons being that it is hard to evaluate. Information retrieval techniques fail to capture not-so-wrong cases, i.e. when a system produces a different lemma from the gold standard but one still appropriate to the context. In this paper we present an unsupervised method to produce lemmas from WordNet synsets, inspired by the literature on WSD and applicable to every abstract meaning representation that provides links from concepts to WordNet synsets.
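
For contrast with the paper's approach, the frequency baseline it is compared against can be sketched with NLTK's WordNet interface (the Ksel algorithm itself relies on the hypernym/hyponym structure and is not reproduced here); the snippet assumes NLTK and its WordNet data are installed.

```python
# Assumes: pip install nltk  and  nltk.download("wordnet") have been run.
from nltk.corpus import wordnet as wn

def most_frequent_lemma(synset):
    """Frequency baseline: pick the lemma with the highest tagged count."""
    return max(synset.lemmas(), key=lambda lemma: lemma.count()).name()

syn = wn.synset("food.n.01")  # gloss: "any substance that can be metabolized ..."
print([lemma.name() for lemma in syn.lemmas()])  # lemmas of the synset, e.g. food, nutrient
print(most_frequent_lemma(syn))
```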

conclusion:
In this paper we presented an unsupervised algorithm for lexical choice from WordNet synsets, called Ksel, that exploits the WordNet hierarchy of hypernyms/hyponyms to produce the most appropriate lemma for a given synset. Ksel performs better than an already high baseline based on the frequency of lemmas in an annotated corpus. The future direction of this work is at least twofold. On the one hand, being based purely on a lexical resource, the Ksel approach lends itself nicely to being applied to different languages by leveraging multi-lingual resources like BabelNet (Navigli and Ponzetto, 2012). On the other hand, we want to exploit existing annotated corpora such as the GMB to solve the lexical choice problem in a supervised fashion, that is, ranking candidate lemmas based on features of the semantic structure, along the same track as our previous work on generation from word-aligned logical forms (Basile and Bos, 2013).

num_topics: 7 | topics: Lexical and Semantic Resources and Analysis

id: 11_2014 | year: 2014 | authors: Andrea Bellandi, Davide Albanesi, Alessia Bellusci, Andrea Bozzi, Emiliano Giovannetti | title: The Talmud System: a Collaborative Web Application for the Translation of the Babylonian Talmud Into Italian | language: ENG | num_authors: 5 | num_women_authors: 1 | woman_as_first_author: 0 | affiliations: CNR-ILC | num_affilitations: 1 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Pisa

abstract:
In this paper we introduce the Talmud System, a collaborative web application for the translation of the Babylonian Talmud into Italian. The system we are developing in the context of the “Progetto Traduzione del Talmud Babilonese” has been designed to improve the experience of collaborative translation using Computer-Assisted Translation technologies and providing a rich environment for the creation of comments and the annotation of text on a linguistic and semantic basis.

introduction:
Alongside the Bible, the Babylonian Talmud (BT) is the Jewish text that has mostly influenced Jewish life and thought over the last two millennia. The BT corresponds to the effort of late antique scholars (Amoraim) to provide an exegesis of the Mishnah, an earlier rabbinic legal compilation, divided in six “orders” (sedarim) corresponding to different categories of Jewish law, with a total of 63 tractates (massekhtaot). Although following the inner structure of the Mishnah, the BT discusses only 37 tractates, with a total of 2711 double-sided folia in the printed edition (Vilna, XIX century). The BT is a comprehensive literary creation, which went through an intricate process of oral and written transmission, was expanded in every generation before its final redaction, and has been the object of explanatory commentaries and reflexions from the Medieval Era onwards. In its long history of formulation, interpretation, transmission and study, the BT reflects inner developments within the Jewish tradition as well as the interactions between Judaism and the cultures with which the Jews came into contact (Strack and Stemberger, 1996). In the past decades, online resources for studying Rabbinic literature have considerably increased and several digital collections of Talmudic texts and manuscripts are nowadays available (Lerner, 2010). Particularly, scholars as well as a larger public of users can benefit from several new computing technologies applied to the research and the study of the BT, such as (i.) HTML (Segal, 2006), (ii.) optical character recognition, (iii.) three-dimensional computer graphics (Small, 1999), (iv.) text encoding, text and data mining, (v.) image recognition (Wolf et al., 2011(a); Wolf et al., 2011(b); Shweka et al., 2013), and (vi.) computer-supported learning environments (Klamma et al., 2005; Klamma et al., 2002). In the context of the “Progetto Traduzione del Talmud Babilonese”, the Institute for Computational Linguistics of the Italian National Research Council (ILC-CNR) is in charge of developing a collaborative Java-EE web application for the translation of the BT into Italian by a team of translators. The Talmud System (TS) already includes Computer-Assisted Translation (CAT), Knowledge Engineering and Digital Philology tools, and, in future versions, will include Natural Language Processing tools for Hebrew/Aramaic, each of which will be outlined in detail in the next Sections.

conclusion:
We here introduced the Talmud System, a collaborative web application for the translation of the Babylonian Talmud into Italian, integrating technologies belonging to the areas of (i.) Computer-Assisted Translation, (ii.) Digital Philology, (iii.) Knowledge Engineering and (iv.) Natural Language Processing. Through the enhancement of the already integrated components (i., ii., iii.) and the inclusion of new ones (iv.), the TS will allow, in addition to improving the quality and pace of the translation, the provision of a multi-layered navigation (linguistic, philological and semantic) of the translated text (Bellandi et al., 2014(c)).

num_topics: 7 | topics: Lexical and Semantic Resources and Analysis

id: 12_2014 | year: 2014 | authors: Alessia Bellusci, Andrea Bellandi, Giulia Benotto, Amedeo Cappelli, Emiliano Giovannetti, Simone Marchi | title: Towards a Decision Support System for Text Intepretation | language: ENG | num_authors: 6 | num_women_authors: 2 | woman_as_first_author: 1 | affiliations: CNR-ILC | num_affilitations: 1 | at_least_one_international_affiliation: 0 | international_affilitation_only: 0 | international_authors: 0 | names_international_authors: 0 | num_compay_authors: 0 | names_company_authors: 0 | countries_affilitations: Italy | cities_affiliations: Pisa

abstract:
This article illustrates the first steps towards the implementation of a Decision Support System aimed to recreate a research environment for scholars and provide them with computational tools to assist in the processing and interpretation of texts. While outlining the general characteristics of the system, the paper presents a minimal set of user requirements and provides a possible use case on Dante’s Inferno.

introduction:
A text represents a multifaceted object, resulting from the intersection of different expressive layers (graphemic, phonetic, syntactic, lexico-semantic, ontological, etc.). A text is always created by a writer with a specific attempt to outline a certain subject in a particular way. Even when it is not a literary creation, a given text follows its writer’s specific intention and is written in a distinct form. The text creator’s intention is not always self-evident and, even when it is, a written piece might convey very different meanings proportionally to the various readers analysing it. Texts can be seen, in fact, as communication media between writers and readers. Regardless of the epistemological theory about where meaning emerges in the reader-text relationship (Objectivism, Constructivism, Subjectivism), a text needs a reader as much as a writer to be expressive (Chandler, 1995). The reader goes beyond the explicit information given in the text, by making certain inferences and evaluations, according to his/her background, experience, knowledge and purpose. Therefore, interpretation depends on both the nature of the given text and the reader/interpreter; it can be understood as the goal, the process and the outcome of the analytic activity conducted by a certain reader on a given text under specific circumstances. Interpretation corresponds to the different, virtually infinite, mental frameworks and cognitive mechanisms activated in a certain reader/interpreter when examining a given text. The nature of the interpretation of a given text can be philological, historical, psychological, etc.; a psychological interpretation can be Freudian, Jungian, etc. Furthermore, the different categories of literary criticism and the various interpretative approaches might be very much blurred and intertwined, i.e. an historical interpretation might involve philological, anthropological, political and religious analyses. While scholars are generally aware of their mental process of selection and categorization when reading/interpreting a text and, thus, can re-adjust their interpretative approach while they operate, an automatic system has often proved unfit for qualitative analysis due to the complexity of text meaning and text interpretation (Harnad, 1990). Nevertheless, a few semi-automatic systems for qualitative interpretation have been proposed in the last decades. The most outstanding of them is ATLAS.ti, a commercial system for qualitative analysis of unstructured data, which was applied in the early nineties to text interpretation (Muhr, 1991). ATLAS.ti, however, appears too general to respond to the articulated needs of a scholar studying a text, lacking advanced text analysis tools and automatic knowledge extraction features. The University of Southampton and Birkbeck University are currently working on a commercial project, SAMTLA, aimed to create a language-agnostic research environment for studying textual corpora with the aid of computational technologies. In the past, concerning the interpretation of literary texts, the introduction of text annotation approaches and the adoption of high-level markup languages made it possible to go beyond the typical use of concordances (DeVuyst, 1990; Sutherland, 1990; Sperberg-McQueen and Burnard, 1994). In this context, several works have been proposed for the study of Dante’s Commedia.
One of the first works involved the definition of a meta representation of the text of the Inferno and the construction of an ontology formalizing a portion of the world of Dante’s Commedia (Cappelli et al., 2002). Data mining procedures able to conceptually query the aforementioned resources have also been implemented (Baglioni et al., 2004). Among the other works on Dante we cite The World of Dante (Parker, 2001), Digital Dante of Columbia University (LeLoup and Ponterio, 2006) and the Princeton Dante Project (Hollander, 2013). A “multidimensional” social network of characters, places and events of Dante’s Inferno has been constructed to make evident the innermost structure of the text (Cappelli et al., 2011) by leveraging the expressive power of graph representations of data (Newman, 2003; Newman et al., 2006; Easley and Kleinberg, 2010; Meirelles, 2013). A touch table approach to Dante’s Inferno, based on the same social network representation, has also been implemented (Bordin et al., 2013). More recently, a semantic network of Dante’s works has been developed alongside a RDF representation of the knowledge embedded in them (Tavoni et al., 2014). Other works involving text interpretation and graph representations have been carried out on other literary texts, such as Alice in Wonderland (Agarwal et al., 2012) and Promessi Sposi (Bolioli et al., 2013). As discussed by semiologists, linguists and literary scholars (Eco, 1979; Todorov, 1973; Segre, 1985; Roque, 2012), the interpretation of a text may require a complex structuring and interrelation of the information belonging to its different expressive layers. The Decision Support System (DSS) we here introduce aims to assist scholars in their research projects, by providing them with semi-automatic tools specifically developed to support the interpretation of texts at different and combined layers. We chose to start from the analysis of literary texts to be able to face the most challenging aspects related to text interpretation. This work is the third of a series describing the progressive development of the general approach: for the others refer to (Bellandi et al., 2013; Bellandi et al., 2014). In what follows, we describe the general characteristics of the DSS we plan to develop, accompanied by a minimal set of user requirements (2.), we present a possible scenario in which the system can be applied (3.), and we provide some conclusive notes (4.).

conclusion:
In this work, we presented our vision of a Decision Support System for the analysis and interpretation of texts. In addition to outlining the general characteristics of the system, we illustrated a case study on Dante’s Inferno showing how the study of a text can involve elements belonging to three different layers (ontological, dialogical and terminological), thus making it possible to take into account, in an innovative way, both textual and contextual elements. The next steps will consist in the extension of the user requirements and the design of the main components of the system. We plan to start with the basic features allowing a user to create a project and upload documents, and then provide the minimal text processing tools necessary for the definition and management of (at least) the graphemic layer.
| 5
|
Latin Resources
|
13_2014
| 2,014
|
Luisa Bentivogli, Bernardo Magnini
|
An Italian Dataset of Textual Entailment Graphs for Text Exploration of Customer Interactions
|
ENG
| 2
| 1
| 1
|
Fondazione Bruno Kessler
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
This paper reports on the construction of a dataset of textual entailment graphs for Italian, derived from a corpus of real customer interactions. Textual entailment graphs capture relevant semantic relations among text fragments, including equivalence and entailment, and are proposed as an informative and compact representation for a variety of text exploration applications.
|
Given the large production and availability of textual data in several contexts, there is an increasing need for representations of such data that are able at the same time to convey the relevant information contained in the data and to allow compact and efficient text exploration. As an example, customer interaction analytics requires tools that allow for a fine-grained analysis of the customers' messages (e.g. complaining about a particular aspect of a particular service or product) and, at the same time, speed up the search process, which commonly involves a huge amount of interactions, on different channels (e.g. telephone calls, emails, posts on social media), and in different languages. A relevant proposal in this direction has been the definition of textual entailment graphs (Berant et al., 2010), where graph nodes represent predicates (e.g. marry(x, y)) and edges represent the entailment relations between pairs of predicates. This recent research line in Computational Linguistics capitalizes on results obtained in the last ten years in the field of Recognizing Textual Entailment (Dagan et al., 2009), where a successful series of shared tasks have been organized to show and evaluate the ability of systems to draw text-to-text semantic inferences. In this paper we present a linguistic resource consisting of a collection of textual entailment graphs derived from real customer interactions in Italian social fora, which is our motivating scenario. We extend the earlier, predicate-based, variant of entailment graphs to capture entailment relations among more complex text fragments. The resource is meant to be used both for training and evaluating systems that can automatically build entailment graphs from a stream of customer interactions. The entailment graphs are then used by call-center managers to browse large amounts of interactions and efficiently monitor the main reasons for customers' calls. We present the methodology for the creation of the dataset as well as statistics about the collected data. This work has been carried out in the context of the EXCITEMENT project1, in which a large European consortium aims at developing a shared software infrastructure for textual inferences, i.e. the EXCITEMENT Open Platform2 (Padó et al., 2014; Magnini et al., 2014), and at experimenting with new technology (i.e. entailment graphs) for customer interaction analytics.
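The graph structure described above can be made concrete with a small sketch. The following is a minimal, hypothetical representation of an entailment graph over text fragments (directed edges for entailment, mutual edges for equivalence); the fragment texts and the class interface are illustrative assumptions, not the actual format of the resource.

```python
from collections import defaultdict

class EntailmentGraph:
    """Directed graph: an edge (a, b) means fragment a entails fragment b.
    Mutual entailment between two fragments expresses equivalence."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_entailment(self, premise: str, hypothesis: str) -> None:
        self.edges[premise].add(hypothesis)

    def entails(self, premise: str, hypothesis: str) -> bool:
        return hypothesis in self.edges[premise]

    def equivalent(self, a: str, b: str) -> bool:
        return self.entails(a, b) and self.entails(b, a)

# Invented customer-interaction fragments, for illustration only.
g = EntailmentGraph()
g.add_entailment("I cannot top up my prepaid card online", "I have a problem with topping up")
g.add_entailment("The online top-up service does not work", "I have a problem with topping up")
print(g.entails("The online top-up service does not work", "I have a problem with topping up"))  # True
```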
|
We have presented a new linguistic resource for Italian, based on textual entailment graphs derived from real customer interactions. We see a twofold role for this resource: (i) on the one side it provides empirical evidence of the important role of semantic relations and provides insights for new developments of the textual entailment framework; (ii) on the other side, a corpus of textual entailment graphs is crucial for the realization and evaluation of automatic systems that can build entailment graphs for concrete application scenarios.
| 7
|
Lexical and Semantic Resources and Analysis
|
14_2014
| 2,014
|
Lorenzo Bernardini, Irina Prodanof
|
L'integrazione di informazioni contestuali e linguistiche nel riconoscimento automatico dell'ironia
|
ITA
| 2
| 1
| 0
|
Università di Pavia
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pavia
|
Verbal irony is a highly complex rhetorical figure that belongs to the pragmatic level of language. So far, however, all computational attempts at the automatic recognition of irony have limited themselves to searching for linguistic cues that could indicate its presence, without considering pragmatic and contextual factors. In this work, we evaluate the possibility of integrating simple, computable contextual factors with linguistic information in order to improve the effectiveness of automatic irony recognition systems on the comments of online newspapers.
|
Verbal irony is a very complex rhetorical figure located at the pragmatic level of language. Although an ironist can use phonological, prosodic, morphological, lexical, syntactic and semantic elements to produce irony, irony itself is not an internal property of the utterance and is not determined by its formal characteristics. Irony is rather an interpretative phenomenon linked to the expectations that a listener develops regarding the intentions of the author of an utterance produced in a specific context, starting from a broad set of encyclopedic and contextual information.
|
This work examined the possibility of using contextual information to automatically identify irony in the comments of regular readers of online newspapers. For this purpose, a possible computational approach was proposed to identify the most ironic commenters of a community, suggesting that their linguistic material be treated differently from that of the other commenters. The integration of contextual information and linguistic information could have a positive impact on the effectiveness of automatic irony recognition systems, which would play an important role in the field of Sentiment Analysis. We are currently expanding the research by evaluating the influence of information such as the type of newspaper, the topic of the news item and the length of the comment, on a broader comment corpus built from multiple newspapers. Obviously, an integration of such basic contextual information would not completely solve the problem of how to automatically identify irony in online texts. However, this work reflects the firm belief that progressive attempts to integrate simple, computable contextual information with linguistic information are today the best way forward in trying to automatically address phenomena of a pragmatic nature as complex and multifaceted as irony.
| 6
|
Sentiment, Emotion, Irony, Hate
|
15_2014
| 2,014
|
Brigitte Bigi, Caterina Petrone
|
A generic tool for the automatic syllabification of Italian
|
ENG
| 2
| 2
| 1
|
CNRS, Aix-Marseille Université
| 1
| 1
| 1
| 2
|
Brigitte Bigi, Caterina Petrone
| 0
|
0
|
France
|
Marseille
|
This paper presents a rule-based automatic syllabification for Italian. Differently from previously proposed syllabifiers, our approach is more user-friendly since the Python algorithm includes both a command-line and a graphical user interface. Moreover, phonemes, classes and rules are listed in an external configuration file of the tool which can be easily modified by any user. Syllabification performance is consistent with manual annotation. This algorithm is included in SPPAS, a software for automatic speech segmentation, and distributed under the terms of the GPL license.
|
This paper presents an approach to automatic detection of syllable boundaries for Italian speech. This syllabifier makes use of the phonetized text. The syllable is credited as a linguistic unit conditioning both segmental (e.g., consonant or vowel lengthening) and prosodic phonology (e.g., tune-text association, rhythmical alternations), and its automatic annotation represents a valuable tool for quantitative analyses of large speech data sets. While the phonological structure of the syllable is similar across different languages, phonological and phonotactic rules of syllabification are language-specific. Automatic approaches to syllable detection thus have to incorporate such constraints to precisely locate syllable boundaries. The question then arises of how to obtain an acceptable syllabification for a particular language and for a specific corpus (a list of words, a written text or an oral corpus of more or less casual speech). In the state of the art, syllabification can be performed directly from a text file as in (Cioni, 1997), or directly from the speech signal as in (Petrillo and Cutugno, 2003). There are two broad approaches to the problem of automatic syllabification: a rule-based approach and a data-driven approach. The rule-based method effectively embodies some theoretical position regarding the syllable, whereas the data-driven paradigm tries to infer new syllabifications from examples syllabified by human experts. In (Adsett et al., 2009), three rule-based automatic systems and two data-driven automatic systems (Syllabification by Analogy and the Look-Up Procedure) are compared on the syllabification of a lexicon. Indeed, (Cioni, 1997) proposed an algorithm for the syllabification of written texts in Italian, by syllabifying words directly from a text. It is an algorithm of deterministic type and it is based upon the use of recursion and of a binary tree in order to detect the boundaries of the syllables within each word. The outcome of the algorithm is the production of the so-called canonical syllabification (the stream of syllabified words). On the other side, (Petrillo and Cutugno, 2003) presented an algorithm for speech syllabification directly using the audio signal, for both English and Italian. The algorithm is based on the detection of the most relevant energy maxima, using two different energy calculations: the former from the original signal, the latter from a low-pass filtered version. This method allows syllabification to be performed with the audio signal only, without any lexical information. More recently, (Iacoponi and Savy, 2011) developed a complete rule-based syllabifier for Italian (named Sylli) that works on phonemic texts. The rules are based on phonological principles. The system is composed of two transducers (one for the input and one for the output), the syllabification algorithm and the mapping list (i.e., the vocabulary). The two transducers convert the two-dimensional linear input to a three-dimensional phonological form that is necessary for the processing in the phonological module and then send the phonological form back into a linear string for output printing. The system achieved good performance compared to a manual syllabification: more than 98.5% (syllabification of spoken words). This system is distributed as a package written in the C language and must be compiled; the program is an interactive test program that is used in command-line mode.
After the program reads in the phone set definition and syllable structure parameters, it loops asking the user to type in a phonetic transcription, calculating syllable boundaries for it, and then displaying them. When the user types in a null string, the cycling stops and execution ends. Finally, there are two main limitations: this tool is dedicated only to computer scientists, and it does not support time-aligned input data. With respect to these already existing approaches and/or systems, the novel aspects of the work reported in this paper are as follows: to propose a generic and easy-to-use tool to identify syllabic segments from phonemes; and to propose a generic algorithm, together with a set of rules for the particular context of Italian spontaneous speech. In this context, "generic" means that the phone set, the classes and the rules are easily changeable; and "easy-to-use" means that the system can be used by any user.
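To make the configuration-driven, rule-based approach concrete, here is a minimal sketch of a syllabifier driven by a phoneme-to-class table and a single simplified boundary rule. The class table and the rule are deliberately reduced placeholders, not SPPAS's actual phone set or rule file.

```python
# Simplified phoneme-to-class table; in SPPAS this information lives in an
# external configuration file, here it is just an illustrative dict.
PHONE_CLASSES = {
    "a": "V", "e": "V", "i": "V", "o": "V", "u": "V",
    "p": "C", "t": "C", "k": "C", "b": "C", "d": "C", "g": "C",
    "f": "C", "s": "C", "v": "C", "m": "C", "n": "C", "l": "C", "r": "C",
}

def syllabify(phonemes):
    """Split a list of phonemes into syllables.

    Single default rule (a deliberate simplification): inside every
    intervocalic consonant cluster, place the boundary before the last
    consonant, so that exactly one consonant starts the next syllable
    (V.CV, VC.CV, VCC.CV, ...).
    """
    classes = [PHONE_CLASSES.get(p, "C") for p in phonemes]
    vowel_positions = [i for i, c in enumerate(classes) if c == "V"]
    boundaries = []
    for v1, v2 in zip(vowel_positions, vowel_positions[1:]):
        n_consonants = v2 - v1 - 1
        boundaries.append(v2 if n_consonants == 0 else v2 - 1)
    syllables, start = [], 0
    for b in boundaries:
        syllables.append(phonemes[start:b])
        start = b
    syllables.append(phonemes[start:])
    return syllables

# /kontento/ -> kon.ten.to under this simplified rule.
print(syllabify(list("kontento")))
```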
|
The paper presented a new feature of the SPPAS tool that lets the user provide syllabification rules and perform automatic segmentation by means of a well-designed graphical user interface. The system is mainly dedicated to linguists who would like to design and test their own set of rules. A manual verification of the output of the program confirmed the accuracy of the proposed set of rules for the syllabification of dialogues. Furthermore, the rules or the list of phonemes can be easily modified by any user. Possible uses of the program include speech corpus syllabification, dictionary syllabification, and quantitative syllable analysis.
| 13
|
Multimodal
|
16_2014
| 2,014
|
Andrea Bolioli, Eleonora Marchioni, Raffaella Ventaglio
|
Errori di OCR e riconoscimento di entità nell'Archivio Storico de La Stampa
|
ITA
| 3
| 2
| 0
|
CELI Language Technology
| 1
| 0
| 0
| 0
|
0
| 3
|
Andrea Bolioli, Eleonora Marchioni, Raffaella Ventaglio
|
Italy
|
Turin
|
In this article we present the project of recognition of entity mentions carried out for the Historical Archive of La Stampa, together with a brief analysis of the OCR errors found in the documents. The automatic annotation was carried out on about 5 million articles, in editions from 1910 to 2005.
|
In this article we briefly describe the project of automatic annotation of entity mentions carried out on the documents of the Historical Archive of La Stampa, i.e. the automatic recognition of mentions of people, entities and organizations (the "named entities") in about 5 million articles of the newspaper, which followed the larger project of digitization of the Historical Archive.1 Although the project dates back a few years (2011), we think it may still be of interest. 1As reported on the website of the Historical Archive (www.archiviolastampa.it), "The project of digitization of the Historical Archive of La Stampa was carried out by the Committee for the Library of Journalistic Information (CB-DIG), promoted by the Piemonte Region, the Compagnia di San Paolo, the CRT Foundation and the publisher La Stampa, with the aim of creating an online database intended for public consultation and accessible for free." It was the first project of digitization of the entire historical archive of an Italian newspaper, and one of the first international projects of automatic annotation of an entire archive. In 2008, the New York Times had released an annotated corpus containing about 1.8 million articles from 1987 to 2007 (New York Times Annotated Corpus, 2008), in which people, organizations, places and other relevant information had been manually annotated using controlled vocabularies. The Historical Archive of La Stampa contains a total of 1,761,000 digitized pages, for a total of over 12 million articles, from various publications (La Stampa, Stampa Sera, Tuttolibri, Tuttoscienze, etc.), from 1867 to 2005. The automatic recognition of entities was limited to the articles of La Stampa published after 1910, identified as such by the presence of a title, i.e. approximately 4,800,000 documents. The annotation of mentions in the articles makes it possible to analyze co-occurrences between entities and other linguistic data and their trends over time, and to generate infographics, which we cannot discuss in depth in this article. In Figure 1, we show as an example the chart of the people most cited in the newspaper articles over the decades. In the rest of the article we briefly present an analysis of the OCR errors present in the transcriptions, before describing the procedures adopted for the automatic recognition of mentions and the results obtained.
|
In this short article we have described some of the methodologies and issues of the project of automatic annotation of 5 million articles of the Historical Archive of La Stampa. We encountered some difficulties related to the presence of considerable OCR errors and to the breadth and variety of the archive (the entire archive spans from 1867 to 2005). These problems could be addressed positively using information and methodologies that we were able to experiment with only to a small extent in this project, such as crowdsourcing.
| 7
|
Lexical and Semantic Resources and Analysis
|
17_2014
| 2,014
|
Federico Boschetti, Riccardo Del Gratta, Marion Lamé
|
Computer Assisted Annotation of Themes and Motifs in Ancient Greek Epigrams: First Steps
|
ENG
| 3
| 1
| 0
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
This paper aims at illustrating some tools to assist the manual annotation of themes and motifs in literary and epigraphic epigrams for the PRIN 2010/2011 Memorata Poetis Project.
|
The Memorata Poetis Project is a nationally funded project (PRIN 2010/2011), led by Professor Paolo Mastandrea, "Ca' Foscari" University of Venice, in continuity with the Musisque Deoque Project (Mastandrea and Spinazzè, 2011). It aims at the study of the intertextuality between epigraphic and literary epigrams in the Greek, Latin, Arabic and Italian languages. Some of those epigrams are translated into several languages. Currently access to the website (http://memoratapoetis.it) is restricted to the project workgroups, but it will become public before the end of the project, i.e. February 2016. To understand the specific goal of this work in progress, a broader presentation of the project is necessary. Epigrams are short poems and follow specific schemes, contents and structures. These short poems are transmitted both by epigraphs and by manuscripts, with interesting relations between the different traditions: an epigram can have been copied from stone to parchment, losing its original function and contextualization or, on the contrary, a literary epigram can have been adapted to a new epigraphic situation. As inscriptions, epigrams are a communication device inserted in a cultural construct. They are part of an information system, and this implies, in addition to texts and their linguistic aspects: writings, contexts and iconotextual relationships. This holistic and systemic construction creates meanings: in Antiquity and in the Middle Ages, for instance, epigrams, as inscriptions, were often epitaphs. Intertextuality also takes into account this relation between images of the context and the epigrams. For instance, "fountain" is a recurring motif in epigrams. An epigram that refers to divinities of water could be inscribed on a fountain. Such an epigraphic situation participates in the global meaning. It helps to study the original audience and the transmission of epigrams. The reuse of themes and motifs illustrates how authors work and may influence other authors. From epigraphs to modern editions of epigrams, intertextuality traces the movement of languages and concepts across the history of epigrams. Here is an example of a poetic English translation of an epigram by Theocritus: XV. [For a Tripod Erected by Damoteles to Bacchus] The precentor Damoteles, Bacchus, exalts / Your tripod, and, sweetest of deities, you. / He was champion of men, if his boyhood had faults; / And he ever loved honour and seemliness too. (transl. by Calverley, 1892, https://archive.org/details/Theocritus/TranslatedIntoEnglishVerseByC.s.Calverley) Indeed, European cultures have enjoyed epigrams since Antiquity, copied them, translated them, and epigrams became a genre that philology studies ardently. This intercultural process transforms epigrams and, at the same time, tries to keep their essence identifiable in those themes and motifs. Naturally, those themes and motifs, such as "braveness", "pain", "love" or, more concretely, "rose", "shield", "bee", reflect the concepts in use in several different languages. The Memorata Poetis Project tries to capture metrical, lexical and semantic relations among the documents of this heterogeneous multilingual corpus. The study of intertextuality is important to understand the transmission of knowledge from author to author, from epoch to epoch, or from civilization to civilization. Even if the mechanisms of the transmission are not explicit, traces can be found through allusions or thematic similarities.
If the same themes are expressed through the same motif(s), there is probably a relation between the civilizations which express this concept in a literary form, independently of the language in which it is expressed. For instance, the concept of the shortness of life and of the necessity to enjoy this short time is expressed both in Greek and in Latin literature: Anthologia Graeca 11, 56: Πῖνε καὶ εὐφραίνου· τί γὰρ αὔριον ἢ τί τὸ μέλλον, / οὐδεὶς γινώσκει. (transl.: Drink and be happy. Nobody knows how tomorrow or the future will be.) Catullus, carmina, 5: Viuamus, mea Lesbia, atque amemus / ... / Nobis cum semel occidit breuis lux, / Nox est perpetua una dormienda. (transl.: Let us live and love, my Lesbia [...] when our short light has set, we have to sleep a never ending night.) Whereas other units are working on the Greek, Latin, and Italian texts, the ILC-CNR unit of the project is currently in charge of the semantic annotation of a small part of the Greek texts and of all the Arabic texts, and it is developing computational tools to assist the manual annotation, in order to suggest the most suitable tags that identify themes and motifs. The semantic annotation of literary and historical texts in collaborative environments is a relevant topic in the age of the Semantic Web. At least two approaches are possible: a top-down approach, in which an ontology or a predefined taxonomy is used for the annotation, and a bottom-up approach, in which the text can be annotated with unstructured tags that will be organized in a second stage of the work. By combining these approaches, it is possible to collect more evidence to establish agreement on the annotated texts.
|
In conclusion, we have presented a work in progress related to the lexico-semantic instruments under development at the ILC-CNR to assist the annotators who collaborate on the Memorata Poetis Project.
| 5
|
Latin Resources
|
18_2014
| 2,014
|
Dominique Brunato, Felice Dell'orletta, Giulia Venturi, Simonetta Montemagni
|
Defining an annotation scheme with a view to automatic text simplification
|
ENG
| 4
| 3
| 1
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
This paper presents the preliminary steps of ongoing research in the field of automatic text simplification. In line with current approaches, we propose here a new annotation scheme specifically conceived to identify the typologies of changes an original sentence undergoes when it is manually simplified. Such a scheme has been tested on a parallel corpus available for Italian, which we have first aligned at sentence level and then annotated with simplification rules.
|
Automatic Text Simplification (ATS) as a field of research in NLP has been receiving growing attention over the last few years due to the implications it has for both machine- and human-oriented tasks. For what concerns the former, ATS has been employed as a pre-processing step, which provides an input that is easier to analyze by NLP modules, so as to improve the efficiency of, e.g., parsing, machine translation and information extraction. For what concerns the latter, ATS can also play a crucial role in educational and assistive technologies; e.g., it is used for the creation of texts adapted to the needs of particular readers, like children (De Belder and Moens, 2010), L2 learners (Petersen and Ostendorf, 2007), people with low literacy skills (Aluìsio et al., 2008), cognitive disabilities (Bott and Saggion, 2014) or language impairments, such as aphasia (Carroll et al., 1998) or deafness (Inui et al., 2003). From the methodological point of view, while the first attempts were mainly developed on a set of predefined rules based on linguistic intuitions (Chandrasekar et al., 1996; Siddharthan, 2002), current ones are much more prone to adopt data-driven approaches. Within the latter paradigm, the availability of monolingual parallel corpora (i.e. corpora of authentic texts and their manually simplified versions) turned out to be a necessary prerequisite, as they allow for investigating the actual editing operations human experts perform on a text in the attempt to make it more comprehensible for their target readership. This is the case of Brouwers et al. (2014) for French; Bott and Saggion (2014) for Spanish; Klerke and Søgaard (2012) for Danish and Caseli et al. (2009) for Brazilian Portuguese. To our knowledge, only one parallel corpus exists for Italian, which was developed within the EU project Terence, aimed at the creation of suitable reading materials for poor comprehenders (both hearing and deaf, aged 7-11)1. An excerpt of this corpus was used for testing purposes by Barlacchi and Tonelli (2013), who devised the first rule-based system for ATS in Italian, focusing on a limited set of linguistic structures. The approach proposed in this paper is inspired by the recent work of Bott and Saggion (2014) for Spanish and differs from the work of Barlacchi and Tonelli (2013) in that it aims at learning from a parallel corpus the variety of text adaptations that characterize manual simplification. In particular, we focus on the design and development of a new annotation scheme for the Italian language intended to cover a wide set of linguistic phenomena involved in text simplification.
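As an illustration of the kind of resource described above, the following is a minimal sketch of how sentence-aligned pairs annotated with simplification operations might be stored and counted; the rule labels and example sentences are invented and do not reproduce the annotation scheme proposed in the paper.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class AlignedPair:
    original: str
    simplified: str
    rules: list = field(default_factory=list)  # simplification operations observed

# Invented toy pairs with generic rule labels, for illustration only.
corpus = [
    AlignedPair("Il documento, che era stato firmato ieri, è stato pubblicato.",
                "Il documento è stato pubblicato. Era stato firmato ieri.",
                ["split", "reordering"]),
    AlignedPair("L'edificio venne eretto nel 1950.",
                "L'edificio fu costruito nel 1950.",
                ["lexical_substitution"]),
]

# Productivity of each rule across the corpus (how often it is applied).
rule_counts = Counter(rule for pair in corpus for rule in pair.rules)
print(rule_counts.most_common())
```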
|
We have illustrated the first annotation scheme for Italian that includes a wide set of simplification rules spanning different levels of linguistic description. The scheme was used to annotate the only existing Italian parallel corpus. We believe such a resource will give valuable insights into human text simplification and create the prerequisites for automatic text simplification. Current developments are devoted to refining the annotation scheme, on the basis of a qualitative and quantitative analysis of the annotation results; we are also testing the suitability of the annotation scheme with respect to other corpora that we are gathering in a parallel fashion. Based on the statistical findings on the productivity of each rule, we will investigate whether and in which way certain combinations of rules affect the distribution of multi-leveled linguistic features between the original and the simplified texts. In addition, we intend to explore the relation between text simplification and a related task, i.e. readability assessment, with the aim of comparing the effects of such combinations of rules on readability scores.
| 11
|
Text Simplification
|
19_2014
| 2,014
|
Tommaso Caselli, Isabella Chiari, Aldo Gangemi, Elisabetta Jezek, Alessandro Oltramari, Guido Vetere, Laure Vieu, Fabio Massimo Zanzotto
|
Senso Comune as a Knowledge Base of Italian language: the Resource and its Development
|
ENG
| 8
| 3
| 0
|
VU Amsterdam, Sapienza Università di Roma, CNR-ISTC, Università di Pavia, Carnegie Mellon University, IBM Italia, CNRS, Università di Roma Tor Vergata
| 8
| 1
| 1
| 3
|
Tommaso Caselli, Alessandro Oltramari, Laure Vieu
| 1
|
Guido Vetere
|
Netherlands, Pennsylvania (USA)
|
Amsterdam, Pittsburgh
|
Senso Comune is a linguistic knowledge base for the Italian language, which accommodates the content of a legacy dictionary in a rich formal model. The model is implemented in a platform which allows a community of contributors to enrich the resource. We provide here an overview of the main project features, including the lexical-ontology model, the process of sense classification, and the annotation of meaning definitions (glosses) and lexicographic examples. We also illustrate the latest work of alignment with MultiWordNet, presenting the methodologies that have been experimented with, sharing some preliminary results, and highlighting some remarkable findings about the semantic coverage of the two resources.
|
Senso Comune1 is an open, machine-readable knowledge base of the Italian language. The lexical content has been extracted from a monolingual Italian dictionary2, and is continuously enriched through a collaborative online platform. The knowledge base is freely distributed. Senso Comune's linguistic knowledge consists in a structured lexicographic model, where senses can be qualified with respect to a small set of ontological categories. Senso Comune's senses can be further enriched in many ways and mapped to other dictionaries, such as the Italian version of MultiWordNet, thus qualifying as a linguistic Linked Open Data resource. 1.1 General principles The Senso Comune initiative embraces a number of basic principles. First of all, in the era of user generated content, lexicography should be able to build on the direct witness of native speakers. Thus, the project views linguistic knowledge acquisition in a way that goes beyond the exploitation of textual sources. Another important assumption is about the relationship between language and ontology (sec. 2.1). The correspondence between linguistic meanings, as they are listed in dictionaries, and ontological categories is not direct (if any), but rather tangential. Linguistic senses commit to the existence of various kinds of entities, but should not in general be confused with (and collapsed to) logical predicates directly interpretable on these entities. Finally, we believe that, like the language itself, linguistic knowledge should be owned by the entire community of speakers; the project is thus committed to keeping the resource open and fully available.
|
In this paper, we have introduced Senso Comune as an open cooperative knowledge base of the Italian language, and discussed the issue of its alignment with other linguistic resources, such as WordNet. Experiments of automatic and manual alignment with the Italian MultiWordNet have shown that the gap between a native Italian dictionary and a WordNet-based linguistic resource may be relevant, both in terms of coverage and granularity. While this finding is in line with classic semiology (e.g. De Saussure's principle of arbitrariness), it suggests that more attention should be paid to the semantic peculiarity of each language, i.e. the specific way each language constructs a conceptual view of the world. One of the major features of Senso Comune is the way linguistic senses and ontological concepts are put into relation. Instead of equating senses to concepts, a formal relation of ontological commitment is adopted, which weakens the ontological import of the lexicon. Part of our future research will be dedicated to leveraging this as an enabling feature for the integration of different lexical resources, both across and within national languages.
| 7
|
Lexical and Semantic Resources and Analysis
|
20_2014
| 2,014
|
Fabio Celli, Giuseppe Riccardi
|
CorEA: Italian News Corpus with Emotions and Agreement
|
ENG
| 2
| 0
| 0
|
Università di Trento
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
In this paper, we describe an Italian corpus of news blogs, including bloggers' emotion tags, and annotations of agreement relations amongst blogger-comment pairs. The main contributions of this work are: the formalization of the agreement relation, the design of guidelines for its annotation, the quantitative analysis of the annotators' agreement.
|
Online news media, such as journals and blogs, allow people to comment on news articles, to express their own opinions and to debate a wide variety of different topics, from politics to gossip. In this scenario, commenters express approval and dislike of topics, other users and articles, either in linguistic form and/or using pre-coded actions (e.g. like buttons). Corriere is one of the most visited Italian news websites, attracting over 1.6 million readers every day (source: http://en.wikipedia.org/wiki/Corriere_della_Sera, retrieved in January 2014). The peculiarity of corriere.it with respect to most news websites is that it contains metadata on the emotions expressed by the readers about the articles. The emotions (amused, satisfied, sad, preoccupied and indignant) are annotated directly by the readers on a voluntary basis. They can express one emotion per article. In this paper, we describe the collection of a corpus from corriere.it that combines emotions and agreement/disagreement. The paper is structured as follows: in section 2 we provide an overview of related work, in sections 3 and 4 we define the agreement/disagreement relation, describe the corpus, comparing it to related work, and provide the annotation guidelines. In section 5 we draw some conclusions.
|
We presented the CorEA corpus, a resource that combines agreement/disagreement at message level and emotions at participant level. We are not aware of any other resource of this type for Italian. We found that the best way to annotate agreement/disagreement is with binary classes, filtering out "NA" and neutral cases. In the future, we would like to annotate CorEA at topic level and develop classifiers for agreement/disagreement. We plan to make the corpus available at the end of the project.
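Since the contributions include a quantitative analysis of the annotators' agreement, the sketch below shows one standard way such a figure can be computed (Cohen's kappa) on binary agree/disagree labels after filtering out "NA" and neutral cases; the labels are invented, and this is not necessarily the measure used by the authors.

```python
def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Invented binary labels, after filtering out "NA" and neutral cases.
ann1 = ["agree", "disagree", "agree", "agree", "disagree", "agree"]
ann2 = ["agree", "disagree", "disagree", "agree", "disagree", "agree"]
print(round(cohen_kappa(ann1, ann2), 3))
```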
| 6
|
Sentiment, Emotion, Irony, Hate
|
21_2014
| 2,014
|
Alessandra Cervone, Peter Bell, Silvia Pareti, Irina Prodanof, Tommaso Caselli
|
Detecting Attribution Relations in Speech: a Corpus Study
|
ENG
| 5
| 3
| 1
|
Università di Pavia, University of Edinburgh, Google Inc., Trento RISE
| 4
| 1
| 1
| 2
|
Peter Bell, Silvia Pareti
| 1
|
Silvia Pareti
|
United Kingdom, California (USA), Italy
|
Edinburgh, Mountain View, Pavia, Trento
|
In this work we present a methodology for the annotation of Attribution Relations (ARs) in speech, which we apply to create a pilot corpus of spoken informal dialogues. This represents the first step towards the creation of a resource for the analysis of ARs in speech and the development of automatic extraction systems. Despite its relevance for speech recognition systems and spoken language understanding, the relation holding between quotations and opinions and their source has been studied and extracted only in written corpora, characterized by a formal register (news, literature, scientific articles). The shift to the informal register and to a spoken corpus widens our view of this relation and poses new challenges. Our hypothesis is that the decreased reliability of the linguistic cues found for written corpora in the fragmented structure of speech could be overcome by including prosodic clues in the system. The analysis of SARC confirms the hypothesis, showing the crucial role played by the acoustic level in providing the missing lexical clues.
|
Our everyday conversations are populated by other people's words, thoughts and opinions. Detecting quotations in speech represents the key to "one of the most widespread and fundamental topics of human speech" (Bakhtin, 1981, p. 337). A system able to automatically extract a quotation from speech and attribute it to its truthful author would be crucial for many applications. Besides Information Extraction systems aimed at processing spoken documents, it could be useful for Speaker Identification systems (e.g. the strategy of emulating the voice of the reported speaker in quotations could be misunderstood by the system as a change of speaker). Furthermore, attribution extraction could also improve the performance of dialogue parsing, Named-Entity Recognition and Speech Synthesis tools. On a more basic level, recognizing citations in speech could be useful for automatic sentence boundary detection systems, where quotations, being sentences embedded in other sentences, could be a source of confusion. So far, however, attribution extraction systems have been developed only for written corpora. Extracting the text span corresponding to quotations and opinions and ascribing it to its proper source within a text means reconstructing the Attribution Relations (ARs, henceforth) holding between three constitutive elements (following Pareti (2012)): the Source; the Cue, i.e. the lexical anchor of the AR (e.g. say, announce, idea); and the Content. (1) This morning [Source John] [Cue told] me: [Content "It's important to support our leader. I trust him."]. In the past few years AR extraction has attracted growing attention in NLP for its many potential applications (e.g. Information Extraction, Opinion Mining), while remaining an open challenge. Automatically identifying ARs in a text is a complex task, in particular due to the wide range of syntactic structures that the relation can assume and the lack of a dedicated encoding in the language. While the content boundaries of a direct quotation are explicitly marked by quotation marks, opinions and indirect quotations have only partially marked, and often blurred, syntactic boundaries, as they can span across sentences. The subtask of identifying the presence of an AR can be tackled with more success by exploiting the presence of the cue as a lexical anchor establishing the links to the source and content spans. For this reason, cues are the starting point or a fundamental feature of extraction systems (Pareti et al., 2013; Sarmento and Nunes, 2009; Krestel, 2007). In our previous work (Pareti and Prodanof, 2010; Pareti, 2012), starting from a flexible and comprehensive definition (Pareti and Prodanof, 2010, p. 3566) of AR, we created an annotation scheme which has been used to build the first large annotated resource for attribution, the Penn Attribution Relations Corpus (PARC)1, a corpus of news articles. In order to address the issue of detecting ARs in speech, we started from the theoretical and annotation framework of PARC to create a comparable resource. Section 2 explains the issues connected with extracting ARs from speech. Section 3 describes the Speech Attribution Relations Corpus (SARC, henceforth) and its annotation scheme. The analysis of the corpus is presented in Section 4. Section 5 reports an example of how prosodic cues can be crucial to identify ARs in speech. Finally, Section 6 draws the conclusions and discusses future work.
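The Source/Cue/Content triple can be represented very simply; the following minimal sketch encodes example (1) with character-offset spans. The class and the span convention are illustrative assumptions, not the PARC or SARC annotation format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AttributionRelation:
    """An Attribution Relation links a content span to its source via a lexical cue."""
    source: Tuple[int, int]   # character offsets (start, end) of the source span
    cue: Tuple[int, int]
    content: Tuple[int, int]

def span_of(text: str, fragment: str) -> Tuple[int, int]:
    start = text.index(fragment)
    return start, start + len(fragment)

# Example (1) from the text above.
text = 'This morning John told me: "It\'s important to support our leader. I trust him."'
ar = AttributionRelation(
    source=span_of(text, "John"),
    cue=span_of(text, "told"),
    content=span_of(text, '"It\'s important to support our leader. I trust him."'),
)
print(text[ar.content[0]:ar.content[1]])
```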
|
The analysis of SARC, the first resource developed to study ARs in speech, has helped to highlight a major problem of detecting attribution in a spoken corpus: the decreased reliability of the lexical cues crucial in previous approaches (completely useless in at least 10% of the cases) and the consequent need to find reliable prosodic cues to integrate them. The example provided in Section 5 has shown how the integration of acoustic cues could be useful to improve the accuracy of attribution detection in speech. As a future project we are going to perform a large acoustic analysis of the ARs found in SARC, in order to see whether some reliable prosodic cues can in fact be found and used to develop software able to extract attribution from speech.
| 13
|
Multimodal
|
22_2014
| 2,014
|
Mauro Cettolo, Nicola Bertoldi, Marcello Federico
|
Adattamento al Progetto dei Modelli di Traduzione Automatica nella Traduzione Assistita
|
ITA
| 3
| 0
| 0
|
Fondazione Bruno Kessler
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
The integration of machine translation into computer-assisted translation systems is a challenge for both academic and industrial research. Professional translators perceive the ability of machine translation systems to adapt to their style and corrections as crucial. In this article we propose a scheme for adapting machine translation models to a specific document, based on a limited amount of manually corrected text, equal to that produced daily by a single translator.
|
Despite significant and continued progress, machine translation (MT) is not yet able to generate text suitable for publication without human intervention. On the other hand, many studies have confirmed that, in the context of assisted translation, post-editing automatically translated texts increases the productivity of professional translators (see Section 2). This application of MT is the more effective the greater the integration of the machine translation system into the entire translation process, which can be achieved by specializing the system both to the specific text to be translated and to the characteristics of the specific translator and his or her corrections. In the translation industry, the typical scenario is that of one or more translators who work for a few days on a given translation project, i.e. on a set of homogeneous documents. At the end of a working day, the information contained in the newly translated texts and the corrections made by the translators can be fed into the automatic system with the aim of improving the quality of the machine translations proposed the next day. We call this process project adaptation. Project adaptation can be repeated daily until the end of the work, so that all the information that the translators implicitly put at the disposal of the system can be exploited as well as possible. This article presents one of the results of the European MateCat project,1 in which we have developed a web-based assisted translation system that integrates an MT module that adapts itself to the specific project. The validation experiments we illustrate have been conducted on four language pairs, from English to Italian (IT), French (FR), Spanish (ES) and German (DE), and in two domains, Information and Communication Technologies (ICT) and Legal (LGL). Ideally, the proposed adaptation methods should be evaluated by measuring the gain in terms of productivity on real translation projects. Therefore, as far as possible, we conducted field evaluations in which professional translators post-edited the translations hypothesized by adapted and non-adapted systems. The adaptation was carried out on the basis of a part of the project translated during a preliminary phase, in which the same translator was asked to correct the translations provided by a non-adapted baseline system. Since field evaluations are extremely expensive, they cannot be performed frequently enough to compare all possible variants of algorithms and processes. We therefore also conducted laboratory evaluations, in which the corrections of the translators were simulated by reference translations. In general, in the legal domain the improvements observed in the laboratory anticipated those measured in the field. On the contrary, the results in the ICT domain were mixed, due to the low correspondence between the texts used for adaptation and those actually translated during the experiment.
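The daily project-adaptation cycle described above can be summarized in a short sketch; the functions translate, post_edit and adapt_models are placeholders for the MT engine, the human post-editing step and the model update, and do not correspond to MateCat's actual implementation.

```python
def run_project(documents_per_day, baseline_models, translate, post_edit, adapt_models):
    """Daily project-adaptation loop: translate, collect post-edits, re-adapt.

    Each working day the segments of that day's documents are translated with
    the current models and post-edited; the post-edited pairs collected so far
    are then used to adapt the baseline models before the next day starts.
    """
    models = baseline_models
    project_pairs = []
    for segments in documents_per_day:
        drafts = [translate(models, s) for s in segments]
        corrected = [post_edit(s, d) for s, d in zip(segments, drafts)]
        project_pairs.extend(zip(segments, corrected))
        models = adapt_models(baseline_models, project_pairs)
    return models

# Toy demo with trivial stand-ins for the real components.
final_models = run_project(
    documents_per_day=[["segment 1", "segment 2"], ["segment 3"]],
    baseline_models={"translation_memory": []},
    translate=lambda models, src: src.upper(),    # fake MT engine
    post_edit=lambda src, draft: draft.lower(),   # fake human correction
    adapt_models=lambda base, pairs: {"translation_memory": list(pairs)},
)
print(len(final_models["translation_memory"]))  # 3 post-edited pairs collected
```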
|
A current research topic for the assisted translation industry is how to endow machine translation systems with the capability of self-adaptation. In this work we have presented a self-adaptation scheme and the results of its validation not only in the laboratory but also in the field, with the involvement of professional translators, thanks to the collaboration with the industrial partner of MateCat. The experimental results confirmed the impact of our proposal, with productivity gains of up to 43%. However, the method works only if the texts used as the basis for selecting the specific data on which to perform the adaptation are representative of the document to be translated. If this condition is not met, as happened in our English-French/ICT experiments, the adapted models may be unable to improve over the baseline ones; in any case, even in these critical conditions we did not notice any deterioration of performance, demonstrating the conservative behavior of our scheme.
| 10
|
Machine Translation
|
23_2014
| 2,014
|
Isabella Chiari, Tullio De Mauro
|
The New Basic Vocabulary of Italian as a linguistic resource
|
ENG
| 2
| 1
| 1
|
Sapienza Università di Roma
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Rome
|
The New Basic Vocabulary of Italian (NVdB) is a reference linguistic resource for contemporary Italian describing the most used and understood words of the language. The paper offers an overview of the objectives of the work, its main features and its most relevant linguistic and computational applications.
|
Core dictionaries are precious resources that represent the most widely known (in production and reception) lexemes of a language. Among the most significant features characterizing the basic vocabulary of a language are the high textual coverage of a small number of lexemes (ranging from 2,000 to 5,000 top-ranking words in frequency lists), their large polysemy, their relationship to the oldest lexical heritage of a language, and their relevance in first and second language learning and teaching and as reference tools for lexical analysis. Many recent corpus-based works have been produced to provide up-to-date core dictionaries for many European languages (e.g. the Routledge frequency dictionary series). The Italian language has a number of reference frequency lists, all of which are related to corpora and collections of texts dating from 1994 or earlier (among the most relevant Bortolini et al., 1971; Juilland and Traversa, 1973; De Mauro et al., 1993; Bertinetto et al. 2005). The Basic Vocabulary of Italian (VdB, De Mauro, 1980) first appeared as an annex to Guida all'uso delle parole and has subsequently been included in all lexicographic works directed by Tullio De Mauro, with some minor changes. VdB has benefited from a combination of statistical criteria for the selection of lemmas (both grammatical and content words), mainly based on a frequency list of written Italian, LIF (Bortolini et al., 1972), and later on a frequency list of spoken Italian, LIP (De Mauro et al., 1993), and of independent evaluations further submitted to experimentation on primary school pupils. The last version of VdB was published in 2007 in an additional tome of GRADIT (De Mauro, 1999) and counts about 6,700 lemmas, organised in three vocabulary ranges. Fundamental vocabulary (FO) includes the highest frequency words, which cover about 90% of all written and spoken text occurrences [appartamento 'apartment', commercio 'commerce', cosa 'thing', fiore 'flower', improvviso 'sudden', incontro 'meeting', malato 'ill', odiare 'to hate'], while high usage vocabulary (AU) covers about a further 6% of occurrences with the subsequent high frequency words [acciaio 'steel', concerto 'concert', fase 'phase', formica 'ant', inaugurazione 'inauguration', indovinare 'to guess', parroco 'parish priest', pettinare 'to comb']. High availability (AD) vocabulary, on the contrary, is not based on textual statistical resources but is derived from a psycholinguistic insight experimentally verified, and is to be understood in the tradition of the vocabulaire de haute disponibilité, first introduced in the Français fondamental project (Michéa, 1953; Gougenheim, 1964). VdB thus integrates the high frequency vocabulary ranges with the so-called high availability vocabulary (haute disponibilité) and thus provides a full picture not only of written and spoken usages, but also of purely mental usages of words (commonly regarding words having a specific relationship with the concreteness of ordinary life) [abbaiare 'to bark', ago 'needle', forchetta 'fork', mancino 'left-handed', pala 'shovel', pescatore 'fisherman']. Since the first edition of VdB many things have changed in Italian society and language: Italian was then used by only 50% of the population, whereas today it is used by 95% of the population. Many things have changed in the conditions of use of the language for the speakers, and the relationship between the Italian language and dialects has been deeply transformed.
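To illustrate the notion of textual coverage used above (e.g. the fundamental vocabulary covering about 90% of token occurrences), here is a minimal sketch that computes the coverage of the k most frequent lemmas over a lemmatized corpus; the toy corpus and the value of k are placeholders, not the NVdB data.

```python
from collections import Counter

def coverage(lemmatized_tokens, k):
    """Share of token occurrences covered by the k most frequent lemmas."""
    freq = Counter(lemmatized_tokens)
    top_k_total = sum(count for _, count in freq.most_common(k))
    return top_k_total / len(lemmatized_tokens)

# Toy lemmatized corpus; in the real setting k would be on the order of
# a few thousand lemmas for the fundamental vocabulary range.
tokens = ["essere", "cosa", "fiore", "essere", "di", "cosa", "essere", "malato", "di", "essere"]
print(f"{coverage(tokens, 3):.0%}")  # coverage of the 3 most frequent lemmas
```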
The renovated version of VdB, NVdB (Chiari and De Mauro, in press), is presented and previewed in this paper. NVdB is a linguistic resource designed to meet three different purposes: a linguistic one, to be understood in both a theoretical and a descriptive sense, an educational-linguistic one, and a regulative one, for the development of guidelines in public communication. The educational objective is focused on providing a resource to develop tools for language teaching and learning, both for first and second language learners. The descriptive lexicological objective is to provide a lexical resource that can be used as a reference in evaluating the behaviour of lexemes belonging to different text typologies, taking into account the behaviour of different lexemes both from an empirical, corpus-based approach and from an experimental (intuition-based) approach, and to enable the description of the linguistic changes that have affected the most commonly known words in Italian from the Fifties up to today. The descriptive objective is tightly connected to the possible computational applications of the resource in tools able to process general language and take into account its peculiar behaviour. The regulative objective regards the use of VdB as a reference for the editing of administrative texts and, in general, for easy-reading texts.
|
The NVdB of Italian is to be distributed as a frequency dictionary of lemmatized lexemes and multiwords, with data on coverage, frequency, dispersion, usage labels and grammatical qualifications in all subcorpora. A linguistic analysis and comparison with previous data is also provided, with full methodological documentation. The core dictionary and data are also distributed electronically in various formats in order to be used as a reference tool for different applications. Future work will be to integrate data from the core dictionary with new lexicographic entries (glosses, examples, collocations) in order to provide a tool useful both for first and second language learners and for further computational applications.
| 7
|
Lexical and Semantic Resources and Analysis
|
24_2014
| 2,014
|
Francesca Chiusaroli
|
Sintassi e semantica dell'hashtag: studio preliminare di una forma di Scritture Brevi
|
ITA
| 1
| 1
| 1
|
Università di Macerata
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Macerata
|
The contribution presents a linguistic analysis of the hashtag category in Twitter, in particular with regard to Italian forms, in order to observe their morphotactic characteristics within the body of the text and also their semantic potential, with a view to a possible interpretation of the forms in a taxonomic key. The research path is articulated within the theoretical horizon defined by the concept of Scritture Brevi (short writings) as elaborated in Chiusaroli and Zanzotto 2012a and 2012b and now at www.scritturebrevi.it
|
In the definition of the so-called "jargon of Twitter", the hashtag category occupies a prominent place, owing to the typical difficulties of immediate and practical readability of the tweet posted in the form preceded by the hash sign. In particular the presence of the hashtag, together with that of the account (the user address preceded by the at sign), characterizes the artificial structure of the Twitter text compared to ordinary and conventional writing, since the sentence string is concretely altered by such figures, traditionally not contemplated in the orthographic rules of the standard language. This crypticity is confirmed by the simple experiment of transferring a tweet containing hashtags and accounts out of the Twitter environment, where the lack of integration of these forms is immediately perceived and the process of reading and understanding is also substantially compromised. This difficulty, encountered by neophytes of the medium, in fact appears surmountable with practice, while some special features of the hashtag cause problems with regard to the formal, or automatic, decoding of texts. This contribution aims to provide a description of the linguistic properties of the hashtag, the most peculiar textual element of Twitter (Chiusaroli, 2014), with particular regard to expressions in Italian. The consideration of its grammatical values and semantic functions makes it possible to delineate the rules for reading the text, as well as to assess the relevance and necessity, for the analysis, of an interpretation in a taxonomic key, useful for the systematic classification of this recent form of today's web language, which today ranks among the phenomena of the writing of the network (Pistolesi, 2014; Antonelli, 2007; Maraschio and De Martino, 2010; Tavosanis, 2011). The investigation traces the hashtag back to the definition of Scritture Brevi (short writings) as elaborated in Chiusaroli and Zanzotto 2012a and 2012b and now at www.scritturebrevi.it: "The label Scritture Brevi is proposed as a conceptual and metalinguistic category for the classification of graphic forms such as abbreviations, acronyms, signs, icons, indices and symbols, figurative elements, textual expressions and visual codes, for which the principle of 'brevity' connected to the criterion of 'economy' is relevant. In particular, it covers all the graphic manifestations that, in the syntagmatic dimension, being subject to the principle of linearity of the signifier, alter the conventional morphotactic rules of the written language, and intervene in the construction of the message in terms of 'reduction, containment, synthesis' induced by the supports and contexts. The category applies in synchrony and in linguistic diachrony, to standard and non-standard systems, in general and specialized domains." The analysis will also take account of the Twitter experience gained with the account @FChiusaroli and the hashtag #scritturebrevi (since December 26, 2012) and other related hashtags (now elaborated and/or discussed at www.scritturebrevi.it).
|
The need to consider hashtagged elements for both their formal and their semantic value proves indispensable for a proper assessment of linguistic products (Cann, 1993, and, for the foundations, Fillmore, 1976; Lyons, 1977; Chierchia and McConnell-Ginet, 1990), in particular, but not only, in order to judge the real and concrete impact of the phenomenon of the writing of the network on forms and uses, also in the wider perspective of diachronic change (Simone, 1993). Since the hashtag is an important element in isolation and, as such, able to gather content and ideas, any analysis appears incomplete that does not take into account the belonging of the hashtag to multiple categories of the language, from the simple or compound common noun, to the simple or compound proper noun, to syntagmatic and phrasal combinations, with natural consequences for morphosyntactic treatment (Grossmann, 2004; Doleschal and Thornton, 2000; Recanati, 2011). An adequate analysis also cannot do without a semantic classification of the entries in a hierarchical and taxonomic sense (Cardona, 1980, 1985a, 1985b), i.e. one taking into account the degrees of the relationships between the elements, the relations of synonymy or of hyperonymy and hyponymy (Basile, 2005 and Jezek, 2005), and also the purely formal, homographic and homonymic relations (Pazienza, 1999; Nakagawa and Mori, 2003; Pazienza and Pennacchiotti and Zanzotto, 2005), and finally the correspondences in other languages (Smadja, McKeown and Hatzivassiloglou, 1996), in particular English, for its role as the driving language of the network (Crystal, 2003). If it is true that the network, and knowledge on the network, are formed according to procedures that are no longer linear or monodimensional, but proceed in depth and by layers (Eco, 2007), it seems indispensable to include in the horizon of analysis, in addition to the formal, numerical and quantitative element, also the assessment of the semantic and prototypical structure through the reconstruction of the minimal elements or 'primes' of knowledge, a method well known in the history of linguistics under the term reductio (Chiusaroli, 1998 and 2001), which, among other things, lies at the origin of search engine algorithms (Eco, 1993). The structure of the web and the internal organization of CMC allow us to use the Twitter hashtag as an emblematic case study for testing the effectiveness of a method that combines consideration of the functional power of the graphic string with the relevance of the semantic content plane: an intersection of different factors that must be mutually dependent for the correct verification of the data; an integrated theory of (web-)knowledge based on writing (Ong, 2002).
| 6
|
Sentiment, Emotion, Irony, Hate
|
25_2014
| 2,014
|
Morena Danieli, Giuseppe Riccardi, Firoj Alam
|
Annotation of Complex Emotions in Real-Life Dialogues: The Case of Empathy
|
ENG
| 3
| 1
| 1
|
Università di Trento
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
In this paper we discuss the problem of annotating emotions in real-life spoken conversations by investigating the special case of empathy. We propose an annotation model based on the situated theories of emotions. The annotation scheme is directed to observe the natural unfolding of empathy during the conversations. The key component of the protocol is the identification of the annotation unit based on both linguistic and paralinguistic cues. In the last part of the paper we evaluate the reliability of the annotation model.
|
The work we present is part of a research project aiming to provide scientific evidence for the situated nature of emotional processes. In particular we investigate the case of complex social emotions, like empathy, by seeing them as relational events that are recognized by observers on the basis of their unfolding in human interactions. The ultimate goals of our research project are a) understanding the multidimensional signals of empathy in human conversations, and b) generating a computational model of basic and complex emotions. A fundamental requirement for building such computational systems is the reliability of the annotation model adopted for coding real-life conversations. Therefore, in this paper, we focus on the annotation scheme that we are using in our project by illustrating the case of empathy annotation. Empathy is often defined by metaphors that evoke the emotional or intellectual ability to identify another person's emotional states, and/or to understand the states of mind of others. The word "empathy" was introduced in the psychological literature by Titchener in 1909 for translating the German term "Einfühlung". Nowadays it is a commonly held opinion that empathy encompasses several human interaction abilities. The concept of empathy has been deeply investigated by cognitive scientists and neuroscientists, who proposed the hypothesis according to which empathy underpins the social competence of reconstructing the psychic processes of another person on the basis of the possible identification with his/her internal world and actions (Sperber & Wilson, 2002; Gallese, 2003). Despite the wide use of the notion of empathy in psychological research, the concept is still vague and difficult to measure. Among psychologists there is little consensus about which signals subjects rely on for recognizing and echoing empathic responses. Also the uses of the concept in computational attempts to reproduce empathic behavior in virtual agents seem to suffer from the lack of operational definitions. Since the goal of our research is addressing the problem of automatic recognition of emotions in real-life situations, we need an operational model of complex emotions, including empathy, focused on the unfolding of the emotional events. Our contribution to the design of such a model assumes that processing the discriminative characteristics of the acoustic, linguistic, and psycholinguistic levels of the signals can support the automatic recognition of empathy in situated human conversations. The paper is organized as follows: in the next Section we introduce the situated model of emotions underlying our approach, and its possible impact on emotion annotation tasks. In Section 3 we describe our annotation model, its empirical bases, and its reliability evaluation. Finally, we discuss the results of the lexical feature analysis and ranking.
|
In this paper we propose a protocol for annotating complex social emotions in real-life conversations by illustrating the special case of empathy. The definition of our annotation scheme is empirically driven and compatible with the situated models of emotions. The difficult goal of annotating the unfolding of emotional processes in conversations has been approached by capturing the transitions between neutral and emotionally connoted speech events as those transitions manifest themselves in the melodic variations of the speech signals.
| 6
|
Sentiment, Emotion, Irony, Hate
|
26_2014
| 2,014
|
Irene De Felice, Roberto Bartolini, Irene Russo, Valeria Quochi, Monica Monachini
|
Evaluating ImagAct-WordNet mapping for English and Italian through videos
|
ENG
| 5
| 4
| 4
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
In this paper we present the results of the evaluation of an automatic mapping between two lexical resources, WordNet/ItalWordNet and ImagAct, a conceptual ontology of action types instantiated by video scenes. Results are compared with those obtained from a previous experiment performed only on Italian data. Differences between the two evaluation strategies, as well as between the quality of the mappings for the two languages considered in this paper, are discussed.
|
In lexicography, the meaning of words is represented through words: definitions in dictionaries try to make clear the denotation of lemmas, reporting examples of linguistic usage that are fundamental especially for function words like prepositions. Corpus linguistics derives definitions from a huge amount of data. This operation improves word meaning induction and refinement, but still supports the view that words can be defined by words. In the last 20 years dictionaries and lexicographic resources such as WordNet have been enriched with multimodal content (e.g. illustrations, pictures, animations, videos, audio files). Visual representations of denotative words like concrete nouns are effective: see for example the ImageNet project, which enriches WordNet's glosses with pictures taken from the web. Conveying the meaning of action verbs with static representations is not possible; for such cases the use of animations and videos has been proposed (Lew 2010). Short videos depicting basic actions can support the user's need (especially in second language acquisition) to understand the range of applicability of verbs. In this paper we describe the multimodal enrichment of ItalWordNet and WordNet 3.0 action verb entries by means of an automatic mapping with ImagAct (www.imagact.it), a conceptual ontology of action types instantiated by video scenes (Moneglia et al. 2012). Through the connection between synsets and videos we want to illustrate the meaning described by glosses, specifying when the video represents a more specific or a more generic action with respect to the one described by the gloss. We evaluate the mapping by watching the videos and then finding out which, among the synsets related to each video, is the best one to describe the action performed.
|
Mutual enrichment of lexical resources is convenient, especially when different kinds of information are available. In this paper we describe the mapping between ImagAct videos representing action verbs' meanings and WordNet/ItalWordNet, in order to enrich the glosses multimodally. Two types of evaluation have been performed, one based on a gold standard that establishes correspondences between ImagAct's basic action types and ItalWordNet's synsets (Bartolini et al. 2014), and the other based on the suitability of a synset's gloss to describe the action watched in the videos. The second type of evaluation suggests that for Italian the automatic mapping is effective in projecting the videos onto ItalWordNet's glosses. As regards the mapping for English, as future work we plan to change the settings, in order to test whether the number of synonyms available in WordNet has a negative impact on the quality of the mapping.
| 7
|
Lexical and Semantic Resources and Analysis
|
27_2014
| 2,014
|
Irene De Felice, Margherita Donati, Giovanna Marotta
|
CLaSSES: a new digital resource for Latin epigraphy
|
ENG
| 3
| 3
| 1
|
Università di Pisa, CNR-ILC
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
CLaSSES (Corpus for Latin Sociolinguistic Studies on Epigraphic textS) is an annotated corpus for quantitative and qualitative sociolinguistic analyses of Latin inscriptions. It allows specific research on phonological and morphophonological phenomena of non-standard Latin forms with crucial reference to the typology of the text, its origin and its chronological collocation. This paper presents the first macrosection of CLaSSES, focused on the inscriptions from the archaic-early period.
|
Available digital resources for Latin epigraphy include some important databases. The Clauss-Slaby database (http://www.manfredclauss.de/gb/index.html) records almost all Latin inscriptions (by now 696,313 sets of data for 463,566 inscriptions from over 2,480 publications), including also some pictures. It can be searched by records, province, place and specific terms, thus providing users with quantitative information. The Epigraphic Database Roma EDR (http://www.edr-edr.it/English/index_en.php) is part of the international federation of epigraphic databases called Electronic Archive of Greek and Latin Epigraphy (EAGLE). It is possible to look through EDR both as a single database or together with its partner databases by accessing EAGLE's portal (www.eagle-eagle.it). Although they collect a large amount of data, these resources cannot provide linguists with rich qualitative and quantitative linguistic information focused on specific phenomena. The need for a different kind of information automatically extracted from epigraphic texts is particularly pressing when dealing with sociolinguistic issues. There is a current debate on whether or not inscriptions can provide direct evidence on actual linguistic variation occurring in Latin society. As Herman (1985) points out, the debate on the linguistic representativity of inscriptions alternates between totally skeptical and overly optimistic approaches. Following Herman (1970, 1978a, 1978b, 1982, 1985, 1987, 1990, 2000), we believe that epigraphic texts can be regarded as a fundamental source for studying variation phenomena, provided that one adopts a critical approach. Therefore, we cannot entirely agree with the skeptical view adopted by Adams (2013: 33-34), who denies the role of inscriptions as a source for sociolinguistic variation in the absence of evidence also from metalinguistic comments by grammarians and literary authors. That said, the current state-of-the-art digital resources for Latin epigraphic texts do not allow researchers to evaluate the relevance of inscriptions for a sociolinguistic study that would like to rely on direct evidence. Furthermore, it is worth noting that within the huge amount of epigraphic texts available for the Latin language not every inscription is equally significant for linguistic studies: e.g., many inscriptions are very short or fragmentary, others are manipulated or intentionally archaising. Obviously, a (socio)linguistic approach to epigraphic texts should take into account only linguistically significant texts.
|
CLaSSES is an epigraphic Latin corpus for quantitative and qualitative sociolinguistic analyses of Latin inscriptions, which can be useful for both historical linguists and philologists. It is annotated with linguistic and metalinguistic features which allow specific queries on different levels of non-standard Latin forms. We have presented here the first macrosection of CLaSSES, containing inscriptions from the archaic-early period. In the near future we will collect comparable sub-corpora for the Classical and the Imperial periods. Moreover, the data will be organized in a database available on the web.
| 5
|
Latin Resources
|
28_2014
| 2,014
|
Jose' Guilherme Camargo De Souza, Marco Turchi, Matteo Negri, Antonios Anastasopoulos
|
Online and Multitask learning for Machine Translation Quality Estimation in Real-world scenarios
|
ENG
| 4
| 0
| 0
|
Fondazione Bruno Kessler, Università di Trento, University of Notre Dame
| 3
| 1
| 0
| 1
|
Antonios Anastasopoulos
| 0
|
0
|
Indiana (USA), Italy
|
Notre Dame, Trento
|
We investigate the application of different supervised learning approaches to machine translation quality estimation in realistic conditions where training data are not available or are heterogeneous with respect to the test data. Our experiments are carried out with two techniques: online and multitask learning. The former is capable of learning and self-adapting to user feedback, and is suitable for situations in which training data are not available. The latter is capable of learning from data coming from multiple domains, which might considerably differ from the actual testing domain. Two focused experiments in such challenging conditions indicate the good potential of the two approaches.
|
Quality Estimation (QE) for Machine Translation (MT) is the task of estimating the quality of a translated sentence at run-time and without access to reference translations (Specia et al., 2009; Soricut and Echihabi, 2010; Bach et al., 2011; Specia, 2011; Mehdad et al., 2012; C. de Souza et al., 2013; C. de Souza et al., 2014a). As a quality indicator, in a typical QE setting, automatic systems have to predict either the time or the number of editing operations (e.g. in terms of HTER) required by a human to transform the translation into a syntactically/semantically correct sentence. In recent years, QE has gained increasing interest in the MT community as a possible way to: i) decide whether a given translation is good enough for publishing as is, ii) inform readers of the target language only whether or not they can rely on a translation, iii) filter out sentences that are not good enough for post-editing by professional translators, or iv) select the best translation among options from multiple MT and/or translation memory systems. So far, despite its many possible applications, QE research has been mainly conducted in controlled lab testing scenarios that disregard some of the possible challenges posed by real working conditions. Indeed, the large body of research resulting from three editions of the shared QE task organized within the yearly Workshop on Machine Translation (WMT – (Callison-Burch et al., 2012; Bojar et al., 2013; Bojar et al., 2014)) has relied on simplistic assumptions that do not always hold in real life. These assumptions include the idea that the data available to train QE models is: i) large (WMT systems are usually trained over datasets of 800/1000 instances) and ii) representative (WMT training and test sets are always drawn from the same domain and are uniformly distributed). In order to investigate the difficulties of training a QE model in realistic scenarios where such conditions might not hold, in this paper we approach the task in situations where: i) training data is not available at all (Section 2), and ii) training instances come from different domains (Section 3). In these two situations, particularly challenging from the machine learning perspective, we investigate the potential of online and multitask learning methods (the former for dealing with the lack of data, and the latter to cope with data heterogeneity), comparing them with the batch methods currently used.
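As an illustration of the online setting described above, the following minimal sketch (not the authors' implementation) shows a QE loop in which a regressor predicts an HTER-like score and is then updated with the post-editor's feedback. The feature function, learner choice and data are all hypothetical.

```python
# A minimal online QE loop (hypothetical features, learner and data; the
# paper's actual feature set and online algorithms differ).
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

def features(source, translation):
    """Toy features: source/target length and their ratio."""
    ls, lt = len(source.split()), len(translation.split())
    return np.array([[ls, lt, lt / max(ls, 1)]], dtype=float)

model = PassiveAggressiveRegressor(C=0.01)
stream = [                                     # (source, MT output, HTER feedback)
    ("il gatto dorme", "the cat sleeps", 0.00),
    ("piove molto oggi", "it rains a lot of today", 0.25),
    ("domani parto presto", "tomorrow I leave early", 0.05),
]

fitted = False
for src, mt, hter in stream:
    x = features(src, mt)
    pred = model.predict(x)[0] if fitted else 0.5   # cold start before any feedback
    print(f"predicted HTER {pred:.2f} | post-editor feedback {hter:.2f}")
    model.partial_fit(x, [hter])                    # self-adapt to the feedback
    fitted = True
```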
|
We investigated the problem of training reliable QE models in particularly challenging conditions from the learning perspective. Two focused experiments have been carried out by applying: i) online learning to cope with the lack of training data, and ii) multitask learning to cope with heterogeneous training data. The positive results of our experiments suggest that the two paradigms should be further explored (and possibly combined) to overcome the limitations of current methods and make QE applicable in real-world scenarios.
| 10
|
Machine Translation
|
29_2014
| 2,014
|
Rodolfo Delmonte
|
A Computational Approach to Poetic Structure, Rhythm and Rhyme
|
ENG
| 1
| 0
| 0
|
Università Ca' Foscari Venezia
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Venice
|
In this paper we present SPARSAR, a system for the automatic analysis of English and Italian poetry. The system can work on any type of poem and produces a set of parameters that are then used to compare poems with one another, of the same author or of different authors. In this paper, we will concentrate on the second module, which is a rule-based system to represent and analyze poetic devices. Evaluation of the system on the basis of a manually created dataset - including poets from Shakespeare's time down to T.S. Eliot and Sylvia Plath - has shown its high precision and accuracy, approximating 90%.
|
In this paper we present SPARSAR, a system for the automatic analysis of English and Italian poetry. The system can work on any type of poem and produces a set of parameters that are then used to compare poems with one another, of the same author or of different authors. The output can be visualized as a set of coloured boxes of different length and width and allows a direct comparison between poems and poets. In addition, the parameters produced can be used to find the best similar candidate poems by different authors by means of Pearson's correlation coefficient. The system uses a modified version of VENSES, a semantically oriented NLP pipeline (Delmonte et al., 2005). It is accompanied by a module that works at sentence level and produces a whole set of analyses at the quantitative, syntactic and semantic levels. The second module is a rule-based system that converts each poem into phonetic characters, divides words into stressed/unstressed syllables and computes rhyming schemes at line and stanza level. To this end it uses grapheme-to-phoneme translations made available by different sources, amounting to some 500K entries, including the CMU dictionary, the MRC Psycholinguistic Database and the Celex Database, plus our own database made up of some 20,000 entries. Out-of-vocabulary words are handled by means of a prosodic parser we implemented in a previous project (Bacalu & Delmonte, 1999a,b). The system has no limitation on the type of poetic and rhetoric devices, but it is dependent on language: an Italian verse line requires a certain number of beats and metric accents which are different from the ones contained in an English iambic pentameter. The rules implemented can demote or promote word stress on a certain syllable depending on the selected language, line-level syllable length and contextual information. This includes knowledge about a word being part of a dependency structure either as dependent or as head. A peculiar feature of the system is the use of prosodic measures of syllable durations in msec, taken from a database created in a previous project (Bacalu & Delmonte, 1999a,b). We produce a theoretic prosodic measure for each line and stanza using mean durational values associated with stressed/unstressed syllables. We call this index the "prosodic-phonetic density index", because it combines the count of phones with the count of theoretic durations: the index is intended to characterize the real speakable and audible consistency of each line of the poem. Statistics are issued at different levels to evaluate distributional properties in terms of standard deviation, skewness and kurtosis. The final output of the system is a parameterized version of the poem which is then read aloud by a TTS system: parameters are generated taking into account all previous analyses, including sentiment or affective analysis and discourse structure, with the aim of producing an expressive reading. This paper extends previous conference and demo work (SLATE, Essem, EACL) and concentrates on the second module, which focuses on poetic rhythm. The paper is organized as follows: Section 2 presents the main features of the prosodic-phonetic system with some examples; we then present a conclusion and future work.
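To make the rhyme-scheme computation mentioned above more concrete, here is a minimal sketch that labels end rhymes from phonetic line endings. The mini lexicon and function names are hypothetical; SPARSAR relies on its full grapheme-to-phoneme resources instead.

```python
# Toy end-rhyme scheme detection over phonetic line endings
# (invented mini-lexicon: word -> rime from the last stressed vowel onward).
MINI_LEXICON = {
    "night": "AY1 T", "light": "AY1 T",
    "day":   "EY1",   "away":  "EY1",
}

def rhyme_key(line):
    """Return the rime of the line-final word, or None if unknown."""
    word = line.lower().split()[-1].strip(".,;:!?")
    return MINI_LEXICON.get(word)

def rhyme_scheme(lines):
    labels, scheme = {}, []
    for line in lines:
        key = rhyme_key(line)
        if key not in labels:                      # new rime -> next letter
            labels[key] = chr(ord("A") + len(labels))
        scheme.append(labels[key])
    return "".join(scheme)

poem = ["I saw a light", "that turned to day",
        "and then the night", "slowly went away"]
print(rhyme_scheme(poem))   # -> "ABAB"
```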
|
We have done a manual evaluation by analysing a randomly chosen sample of 50 poems out of the 500 analysed by the system. The evaluation has been made by a secondary school teacher of English literature, expert in poetry. We asked the teacher to verify the following four levels of analysis: 1. phonetic translation; 2. syllable division; 3. feet grouping; 4. metrical rhyming structure. Results show an error percentage of around 5% as a whole over the four different levels of analysis, subdivided as follows: 1.8% for parameter 1; 2.1% for parameter 2; 0.3% for parameter 3; 0.7% for parameter 4.
| 6
|
Sentiment, Emotion, Irony, Hate
|
30_2014
| 2,014
|
Rodolfo Delmonte
|
A Reevaluation of Dependency Structure Evaluation
|
ENG
| 1
| 0
| 0
|
Università Ca' Foscari Venezia
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Venice
|
In this paper we develop the argument indirectly raised by the organizers of the 2014 Dependency Parsing for Information Extraction task when they classified 19 relations out of 45 as semantically relevant for the evaluation and excluded the others. This confirms our stance that the current paradigm of dependency parsing evaluation is favoured in comparison with the previous parsing scheme based mainly on constituent or phrase structure evaluation. We will also speak in favour of rule-based dependency parsing and against statistical dependency parsers, for reasons related to the role played by the SUBJect relation in Italian.
|
In this paper I will question the currently widespread assumption that Dependency Structures (hence DS) are the most convenient syntactic representation, when compared to phrase or constituent structure. I will also claim that the evaluation metrics applied to DS somehow "boost" its performance with respect to phrase structure (hence PS) representation, without a real advantage, or at least it has not yet been proven that there is one. In fact, a first verification has been achieved by this year's Evalita campaign, which has introduced a new way of evaluating Dependency Structures, called DS for Information Extraction - and we will comment on that below. In the paper I will also argue that some features of current statistical dependency parsers speak against the use of such an approach for parsing languages like Italian, which have a high percentage of non-canonical structures (hence NC). In particular I will focus on problems raised by the way in which SUBJect arguments are encoded. State-of-the-art systems are using more and more dependency representations, which have lately shown great resiliency, robustness, scalability and great adaptability for semantic enrichment and processing. However, by far the majority of systems available off the shelf don't support a fully semantically consistent representation and lack Empty or Null Elements (see Cai et al. 2001). O. Rambow (2010), in his opinion paper on the relations between dependency and phrase structure representation, has omitted to mention the most important feature that differentiates them. PS evaluation is done on the basis of brackets, where each bracket contains at least one HEAD, but may contain other heads nested inside. Of course, it may also contain a certain number of minor categories which, however, don't count for evaluation purposes. On the contrary, DS evaluation is done on the basis of head-dependent relations intervening between pairs of TOKENs. So on the one side, F-measure evaluates the number of brackets, which coincides with the number of heads; on the other side it evaluates the number of TOKENs. Now, the difference in performance is clearly shown by the percent accuracy obtained with PS evaluation, which for Italian ranged between 70% and 75% in Evalita 2007, and between 75% and 80% in Evalita 2009 – I don't take into account the 2011 results, which refer to only one participant. DS evaluation reached peaks of 95% for UAS and between 84% and 91% for LAS. Since the data were the same in the two campaigns, one wonders what makes one representation more successful than the other. Typically, constituent parsing is evaluated on the basis of constituents, which are made up of a head and an internal sequence of minor constituents dependent on the head. What is really important in the evaluation is the head of each constituent and the way in which PS are organized, and this corresponds to bracketing. On the contrary, DS are organized on the basis of a "word level grammar", so that each TOKEN contributes to the overall evaluation, including punctuation (though not always). Since minor categories are by far the great majority of the tokens making up a sentence – in Western languages, but not so in Chinese, for instance (see Yang & Xue, 2010) – the evaluation is basically made on the ability of the parser to connect minor categories to their heads.
What speaks in favour of adopting DS is the clear advantage gained from the much richer number of labeled relations which intervene at word level, when compared to the number of constituent labels used to annotate PS relations. It is worth noting that DS is not only a much richer representation than PS, but it also encompasses different levels of linguistic knowledge. For instance, punctuation may be used to indicate appositions, parentheticals, coordinated sets, elliptical material, and the subdivision of complex sentences into main and subordinate clauses. The same applies to discourse markers, which may be the ROOT of a sentence. These all have to be taken into account when computing DS, but not in PS parsing.
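Since the discussion above hinges on token-based scoring, the following minimal sketch (illustrative only, with a toy three-token sentence) shows how UAS and LAS are computed from per-token (head, label) pairs.

```python
# Minimal UAS/LAS computation over token-level (head, label) pairs,
# to make concrete what "token-based" dependency evaluation scores.
def uas_las(gold, pred):
    """gold/pred: lists of (head_index, relation_label), one entry per token."""
    assert len(gold) == len(pred)
    correct_heads = sum(g[0] == p[0] for g, p in zip(gold, pred))
    correct_both  = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return correct_heads / n, correct_both / n

# "Il gatto dorme" -> toy gold and predicted analyses
gold = [(2, "det"), (3, "nsubj"), (0, "root")]
pred = [(2, "det"), (3, "dobj"),  (0, "root")]
uas, las = uas_las(gold, pred)
print(f"UAS={uas:.2f}  LAS={las:.2f}")   # UAS=1.00  LAS=0.67
```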
|
In this paper I tried to highlight critical issues in the current way of evaluating DS, which indirectly "boosts" the performance of the parsers when compared to phrase structure evaluation. I assume this is due to the inherent shortcoming of DS evaluation of not considering semantically relevant grammatical relations as being more important than minor categories. Statistical dependency parsers may have more problems in encoding the features of the Italian Subject because of its multiple free representations. For these reasons, I argued in favour of rule-based dependency parsers and I presented, in particular, one example from TULETUT, a deep parser of Italian.
| 4
|
Syntax and Dependency Treebanks
|
31_2014
| 2,014
|
Rodolfo Delmonte
|
Analisi Linguistica e Stilostatistica - Uno Studio Predittivo sul Campo
|
ITA
| 1
| 0
| 0
|
Università Ca' Foscari Venezia
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Venice
|
In this work we present a field study defining a precise evaluation scheme for text style, used to produce a ranking of documents on the basis of their persuasiveness and readability. The study concerns the political programme documents published on a public forum by the candidates in the election at Ca' Foscari University of Venice. The documents were analyzed by our system and a ranking was created on the basis of scores associated with eleven parameters. After the vote, we found that the system had predicted the actual winner in advance. The results were published in a local newspaper.
|
The analysis starts from the idea that the style of a programmatic document is made up of quantitative elements at the word level, of elements derived from the frequent use of certain syntactic structures, and of strictly semantic and pragmatic characteristics such as the use of words and concepts that inspire positivity. I conducted the analysis starting from the texts available on the web or received from the candidates, using a series of parameters that I had created for the analysis of political discourse in Italian newspapers during the most recent government crises. The results have been published in national and international works listed in a short bibliography. The analysis uses classical quantitative data such as the type/token ratio and then introduces information derived from the GETARUNS system, which performs a complete parsing of the texts from a syntactic, semantic and pragmatic point of view. The data listed in the tables are derived from the system's output files. The system produces a file for each sentence, a comprehensive file for the semantic analysis of the text, and a file with the verticalized version of the analyzed text in which each word is accompanied by a syntactic-semantic-pragmatic classification. The system consists of a transition-network parser augmented with subcategorization information, which first builds chunks and then, in cascade, higher complex constituents up to the sentence level. This representation is passed to another component that works on islands, starting from each verbal complex, corresponding to the verbal constituent. The island parser identifies the predicate-argument structure, including adjuncts, on the basis of the information contained in a subcategorization lexicon for Italian built in previous projects, containing approximately 50,000 verbal and adjectival entries at different levels of depth. A list of selectional preferences for verbs, nouns and adjectives, derived from the available Italian treebanks and containing approximately 30,000 entries, is also used. In Table 1, we report the data in absolute numbers.
|
To produce an overall ranking, each parameter can be considered as having a positive or negative polarity. If it is positive, the candidate with the highest amount is awarded the value 5 and the others, in decreasing order, one point less each, down to the value 1. If the parameter has a negative polarity, the candidate with the highest amount receives the lowest score of 1 and the others, in decreasing order, one point more each, up to 5. The overall ranking is then obtained by summing all the individual points. The attribution of polarity to each parameter follows linguistic and stylistic criteria, as follows. 1. NullSubject - positive: a higher number of zero subjects indicates the will to create a very coherent text and not to overload the reference to the same entity with repeated or coreferential forms. 2. Subjective Props - negative: a majority of propositions expressing subjective content indicates the writer's tendency to present his own ideas in a non-objective way. 3. Negative Props - negative: heavy use of negative propositions, i.e. with negation or negative adverbs, is a stylistic trait that is not propositive but tends to contradict what is said or done by others. 4. Non-factive Props - negative: the use of non-factive propositions indicates the stylistic tendency to present one's own ideas using non-realis tenses and moods - subjunctive, conditional, future and indefinite tenses. 5. Props/Sents - negative: the ratio indicating the number of propositions per sentence is considered negative in the sense that the higher it is, the greater the complexity of the style. 6. Negative Ws - negative: the number of negative words used in proportion to the total number of words has a negative value. 7. Positive Ws - positive: the number of positive words used in proportion to the total number of words has a positive value. 8. Passive Diath - negative: the number of passive forms used is considered negative as it obscures the agent of the action described. 9. Token/Sents - negative: the number of tokens in relation to the sentences expressed is treated as a negative factor, again in relation to the problem of the induced complexity. 10. Vr - Rw - negative: this measure considers vocabulary richness based on the so-called Rare Words, i.e. the total number of Hapax/Dis/Tris Legomena in the rank list; the more unique or low-frequency words there are, the more complex the style. 11. Vr - Tt - negative: as above, this time considering the total number of Types. Assigning the scores on the basis of the criteria indicated defines the following final ranking: Bugliesi 47, LiCalzi 36, Brugiavini 28, Cardinaletti 27, Bertinetti 27 (Table 2: final ranking based on the 11 parameters; see Tab. 2.1 in Appendix 2). Including also the points relating to the use of PERSONALE and related nouns, we obtain this overall result: Bugliesi 53, LiCalzi 44, Brugiavini 37, Cardinaletti 31, Bertinetti 30 (Table 3: final ranking based on the 13 parameters; see Tab. 3.1 in Appendix 2). Using the parameters as criteria to classify the style of the candidates and turning the scores into verbal assessments, the following two judgements are obtained. Bugliesi won because he used a more coherent style, with a simpler vocabulary and simple, direct syntactic structures, expressing the contents in a concrete and factual way, and addressing stakeholders at all levels, teaching and non-teaching staff. He also used fewer negative expressions and propositions and more positive expressions. The data also tell us that Bugliesi's programme is strongly correlated with LiCalzi's, but not with those of the other candidates. Cardinaletti's programme uses a less cohesive style, with a somewhat elaborate vocabulary and considerably more complex syntactic structures, expressing the contents in a much less concrete and much less factual way, while still addressing stakeholders at all levels, teaching and non-teaching staff. It also contains few negative expressions and propositions and relatively few positive expressions. Finally, Cardinaletti's programme correlates well with Brugiavini's.
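The rank-based scoring just described is simple enough to sketch in a few lines. The code below is illustrative only, with invented parameter values rather than the paper's data: for each parameter, candidates are ranked and awarded 5..1 points, with the direction reversed for negatively-valued parameters, and the totals are summed.

```python
# Sketch of the rank-based scoring scheme with toy figures (not the paper's data).
def score_candidates(values, polarity):
    """values: {candidate: raw amount}; polarity: +1 (positive) or -1 (negative)."""
    ranked = sorted(values, key=values.get, reverse=(polarity > 0))
    return {cand: 5 - i for i, cand in enumerate(ranked)}   # award 5, 4, 3, 2, 1

parameters = {                       # parameter -> (polarity, per-candidate amounts)
    "NullSubject": (+1, {"A": 12, "B": 9, "C": 7, "D": 5, "E": 3}),
    "NegativeWs":  (-1, {"A": 4,  "B": 6, "C": 9, "D": 11, "E": 2}),
}

totals = {c: 0 for c in "ABCDE"}
for polarity, amounts in parameters.values():
    for cand, pts in score_candidates(amounts, polarity).items():
        totals[cand] += pts

print(totals)   # e.g. {'A': 9, 'B': 7, 'C': 5, 'D': 3, 'E': 6}
```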
| 11
|
Text Simplification
|
32_2014
| 2,014
|
Marina Ermolaeva
|
An adaptable morphological parser for agglutinative languages
|
ENG
| 1
| 1
| 1
|
Moscow State University
| 1
| 1
| 1
| 1
|
Marina Ermolaeva
| 0
|
0
|
Russia
|
Moscow
|
The paper reports the state of ongoing work on creating an adaptable morphological parser for various agglutinative languages. A hybrid approach involving methods typically used for non-agglutinative languages is proposed. We explain the design of a working prototype for inflectional nominal morphology and demonstrate its operation with an implementation for Turkish. An additional experiment on adapting the parser to Buryat (Mongolic family) is discussed.
|
The most obvious way to perform morphological parsing is to list all possible morphological variants of each word. This method has been successfully used for non-agglutinative languages, e.g. (Segalovich 2003) for Russian, Polish and English. Agglutinative languages pose a much more complex task, since the number of possible forms of a single word is theoretically infinite (Jurafsky and Martin 2000). Parsing languages like Turkish often involves designing complicated finite-state machines where each transition corresponds to a single affix (Hankamer 1986; Eryiğit and Adalı 2004; Çöltekin 2010; Sak et al. 2009; Sahin et al. 2013). While these systems can perform extremely well, a considerable redesign of the whole system is required in order to implement a new language or to handle a few more affixes. The proposed approach combines both methods mentioned above. A simple finite-state machine makes it possible to split up the set of possible affixes, producing a finite and relatively small set of sequences that can be easily stored in a dictionary. Most systems created for parsing agglutinative languages, starting with (Hankamer 1986) and (Oflazer 1994), process words from left to right: first stem candidates are found in a lexicon, then the remaining part is analyzed. The system presented in this paper applies the right-to-left method (cf. (Eryiğit and Adalı 2004)): affixes are found first. It can ultimately work without a lexicon, in which case the remaining part of the word is assumed to be the stem; to improve parsing precision, it is possible to compare it to the stems contained in a lexicon. A major advantage of right-to-left parsing is the ability to process words with unknown stems without additional computation. Multi-language systems (Akın and Akın 2007; Arkhangelskiy 2012) are a relatively new tendency. With the hybrid approach mentioned above, the proposed system fits within this trend. As the research is still in progress, the working prototype of the parser (written in Python) is currently restricted to nominal inflectional morphology. Within this scope, it has been implemented for Turkish; an additional experiment with Buryat is discussed in Section 5.
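A minimal sketch of the right-to-left strategy described above, assuming a toy dictionary of Turkish nominal suffix sequences (in the actual system these sequences are produced by the finite-state machine and the inventory is far richer; all names here are hypothetical):

```python
# Right-to-left affix-sequence stripping with an optional stem lexicon.
SUFFIX_SEQUENCES = {            # surface sequence -> gloss (toy inventory)
    "lerde": "PL.LOC",
    "ler":   "PL",
    "de":    "LOC",
    "":      "",                # bare form
}

def parse(word, lexicon=None):
    """Return (stem, gloss) candidates, trying longer suffix sequences first."""
    analyses = []
    for seq in sorted(SUFFIX_SEQUENCES, key=len, reverse=True):
        if word.endswith(seq):
            stem = word[: len(word) - len(seq)] if seq else word
            if lexicon is None or stem in lexicon:
                analyses.append((stem, SUFFIX_SEQUENCES[seq]))
    return analyses

print(parse("evlerde"))                  # works without a lexicon (unknown stems)
print(parse("evlerde", lexicon={"ev"}))  # [('ev', 'PL.LOC')] with a stem lexicon
```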
|
At the moment, the most important task is lifting the temporary limitations of the parser by implementing other parts of speech (finite and non-finite verb forms, pronouns, postpositions etc.) and derivational suffixes. Although the slot system described in 3.1 has been sufficient for both Turkish and Buryat, other agglutinative languages may require more flexibility. This can be achieved either by adding more slots (thus making the slot system nearly universal) or by providing a way to derive the slot system automatically, from plain text or from a corpus of tagged texts; the latter solution would also considerably reduce the amount of work that has to be done manually. Another direction of future work involves integrating the parser into a more complex system. DIRETRA, an engine for Turkish-to-English direct translation, is being developed on the basis of the parser. The primary goal is to provide a word-for-word translation of a given text, reflecting the morphological phenomena of the source language as precisely as possible. The gloss lines output by the parser are processed by the other modules of the system and ultimately transformed into text representations in the target language. Though the system is being designed for Turkish, the next planned step is to implement other Turkic languages as well.
| 7
|
Lexical and Semantic Resources and Analysis
|
33_2014
| 2,014
|
Lorenzo Ferrone, Fabio Massimo Zanzotto
|
Distributed Smoothed Tree Kernel
|
ENG
| 2
| 0
| 0
|
Università di Roma Tor Vergata
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Rome
|
In this paper we explore the possibility of merging the world of Compositional Distributional Semantic Models (CDSM) with Tree Kernels (TK). In particular, we will introduce a specific tree kernel (smoothed tree kernel, or STK) and then show that it is possible to approximate this kernel with the dot product of two vectors obtained compositionally from the sentences, thereby creating a new CDSM.
|
Compositional distributional semantics is a flourishing research area that leverages distributional semantics (see Baroni and Lenci (2010)) to produce the meaning of simple phrases and full sentences (hereafter called text fragments). The aim is to scale up the success of word-level relatedness detection to longer fragments of text. Determining similarity or relatedness among sentences is useful for many applications, such as multi-document summarization, recognizing textual entailment (Dagan et al., 2013), and semantic textual similarity detection (Agirre et al., 2013; Jurgens et al., 2014). Compositional distributional semantics models (CDSMs) are functions mapping text fragments to vectors (or higher-order tensors). Functions for simple phrases directly map distributional vectors of words to distributional vectors for the phrases (Mitchell and Lapata, 2008; Baroni and Zamparelli, 2010; Zanzotto et al., 2010). Functions for full sentences are generally defined as recursive functions over the ones for phrases (Socher et al., 2011). Distributional vectors for text fragments are then used as inner layers in neural networks, or to compute similarity among text fragments via dot product. CDSMs generally exploit structured representations tx of text fragments x to derive their meaning f(tx), but the structural information, although extremely important, is obfuscated in the final vectors. Structure and meaning can interact in unexpected ways when computing cosine similarity (or dot product) between vectors of two text fragments, as shown for full additive models in (Ferrone and Zanzotto, 2013). Smoothed tree kernels (STK) (Croce et al., 2011) instead realize a clearer interaction between structural information and distributional meaning. STKs are specific realizations of convolution kernels (Haussler, 1999) where the similarity function is recursively (and, thus, compositionally) computed. Distributional vectors are used to represent word meaning in computing the similarity among nodes. STKs, however, are not considered part of the CDSM family. As usual in kernel machines (Cristianini and Shawe-Taylor, 2000), STKs directly compute the similarity between two text fragments x and y over their tree representations tx and ty, that is, STK(tx, ty). The function f that maps trees into vectors is only implicitly used, and, thus, STK(tx, ty) is not explicitly expressed as the dot product or the cosine between f(tx) and f(ty). Such a function f, which is the underlying reproducing function of the kernel (Aronszajn, 1950), is a CDSM since it maps trees to vectors by using distributional meaning. However, the huge dimensionality of Rn (since it has to represent the set of all possible subtrees) prevents the actual computation of the function f(t), which thus can only remain implicit. Distributed tree kernels (DTK) (Zanzotto and Dell'Arciprete, 2012) partially solve this last problem. DTKs approximate standard tree kernels (such as (Collins and Duffy, 2002)) by defining an explicit function DT that maps trees to vectors in Rm, where m ≪ n and Rn is the explicit space for tree kernels. DTKs approximate standard tree kernels (TK), that is, ⟨DT(tx), DT(ty)⟩ ≈ TK(tx, ty), by approximating the corresponding reproducing function. Thus, these distributed trees are small vectors that encode structural information. In DTKs tree nodes u and v are represented by nearly orthonormal vectors, that is, vectors u and v such that ⟨u, v⟩ ≈ δ(u, v), where δ is the Kronecker delta. This is in contrast with distributional semantics vectors, where ⟨u, v⟩ is allowed to be any value in [0, 1] according to the similarity between the words u and v. In this paper, leveraging on distributed trees, we present a novel class of CDSMs that encode both structure and distributional meaning: the distributed smoothed trees (DST). DSTs carry structure and distributional meaning in a rank-2 tensor (a matrix): one dimension encodes the structure and one dimension encodes the meaning. By using DSTs to compute the similarity among sentences with a generalized dot product (or cosine), we implicitly define the distributed smoothed tree kernels (DSTK), which approximate the corresponding STKs. We present two DSTs along with the two smoothed tree kernels (STKs) that they approximate. We experiment with our DSTs to show that their generalized dot products approximate STKs by directly comparing the produced similarities and by comparing their performance on two tasks: recognizing textual entailment (RTE) and semantic similarity detection (STS). Both experiments show that the dot product on DSTs approximates STKs and, thus, that DSTs encode both structural and distributional semantics of text fragments in tractable rank-2 tensors. Experiments on STS and RTE show that the distributional semantics encoded in DSTs increases performance over structure-only kernels. DSTs are the first positive way of taking into account both structure and distributional meaning in CDSMs. The rest of the paper is organized as follows. Section 2.1 introduces the basic notation used in the paper. Section 2 describes our distributed smoothed trees as compositional distributional semantic models that can represent both structural and semantic information. Section 4 reports on the experiments. Finally, Section 5 draws some conclusions.
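A small numerical sketch of the two ingredients just described — nearly orthonormal random node vectors and a rank-2 structure-by-meaning tensor compared via a generalized (Frobenius) dot product. The dimensions, random vectors and helper names are illustrative choices, not the paper's actual construction:

```python
# Nearly orthonormal node vectors and a rank-2 structure/meaning tensor.
import numpy as np

rng = np.random.default_rng(42)
M = 4096                                    # toy distributed-structure dimension

def node_vector():
    """Random unit vector; two independent such vectors are nearly orthogonal."""
    v = rng.normal(size=M)
    return v / np.linalg.norm(v)

u, v = node_vector(), node_vector()
print(round(u @ u, 3), round(abs(u @ v), 3))      # ~1.0 and ~0.0 (Kronecker-like)

def subtree_tensor(structure_vec, meaning_vec):
    """Rank-2 tensor: one axis for structure, one for distributional meaning."""
    return np.outer(structure_vec, meaning_vec)

emb_dog, emb_cat = rng.normal(size=50), rng.normal(size=50)
s = node_vector()                           # stands for a tree-fragment vector
T1, T2 = subtree_tensor(s, emb_dog), subtree_tensor(s, emb_cat)

# The Frobenius dot product factorizes into structure x meaning similarities.
print(np.allclose(np.sum(T1 * T2), (s @ s) * (emb_dog @ emb_cat)))   # True
```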
|
Distributed Smoothed Trees (DST) are a novel class of Compositional Distributional Semantics Models (CDSM) that effectively encode structural information and distributional semantics in tractable rank-2 tensors, as the experiments show. The paper shows that DSTs contribute to closing the gap between two apparently different approaches: CDSMs and convolution kernels. This contributes to starting a discussion towards a deeper understanding of the representational power of structural information in existing CDSMs.
| 22
|
Distributional Semantics
|
34_2014
| 2,014
|
Francesca Frontini, Valeria Quochi, Monica Monachini
|
Polysemy alternations extraction using the PAROLE SIMPLE CLIPS Italian lexicon
|
ENG
| 3
| 3
| 1
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
This paper presents the results of an experiment on the induction of polysemy alternations from a lexicon (Utt and Padó, 2011; Frontini et al., 2014), discussing the results and proposing an amendment to the original algorithm.
|
The various senses of polysemous words do not always stand in the same relation to each other. Some senses group together along certain dimensions of meaning while others stand clearly apart. Machine-readable dictionaries have in the past used coarse-grained sense distinctions, but often without any explicit indication as to whether these senses were related or not. Most significantly, few machine-readable dictionaries explicitly encode systematic alternations. In Utt and Padó (2011) a methodology is described for deriving systematic alternations of senses from WordNet. In Frontini et al. (2014) the work was carried out for Italian using the PAROLE SIMPLE CLIPS lexicon (PSC) (Lenci et al., 2000), a lexical resource that contains a rich set of explicit lexical and semantic relations. The purpose of the latter work was to test the methodology of the former against the inventory of regular polysemy relations already encoded in the PSC semantic layer. It is important to notice that this was not possible in the original experiment, as WordNet does not contain such information. The results of the work done on PSC show how the original methodology can be useful in testing the consistency of encoded polysemies and in finding gaps in individual lexical entries. At the same time, the methodology is not infallible, especially in distinguishing type alternations that frequently occur in the lexicon due to systematic polysemy from other alternations that are produced by metaphoric extension, derivation or other non-systematic sense-shifting phenomena. In this paper we shall briefly outline the problem of lexical ambiguity; then describe the procedure of type induction carried out in the previous experiments, discussing the most problematic results; finally we will propose a change to the original methodology that seems more promising in capturing the essence of systematic polysemy.
|
To conclude, these preliminary results seem to confirm the hypothesis that measuring the association strength between types, rather than the frequency of their co-occurrence, is useful to capture the systematicity of an alternation. In future work it may be interesting to test ranking by other association measures (such as Log Likelihood) and with different filterings. Finally, the original experiment may be repeated on both the Italian and English WordNets in order to evaluate the new method on the original lexical resource.
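To illustrate the point about association strength versus raw co-occurrence frequency, here is a toy sketch that scores type alternations with pointwise mutual information over a handful of invented sense inventories (the actual work uses PSC data and may rely on different measures, e.g. Log Likelihood):

```python
# Scoring type alternations by association strength (PMI) rather than frequency.
from collections import Counter
from itertools import combinations
from math import log2

lexicon = {                                  # lemma -> semantic types of its senses (toy)
    "pollo":    {"Animal", "Food"},
    "salmone":  {"Animal", "Food"},
    "scuola":   {"Building", "Institution"},
    "banca":    {"Building", "Institution"},
    "giornale": {"Institution", "Information"},
}

pair_counts, type_counts = Counter(), Counter()
for types in lexicon.values():
    type_counts.update(types)
    pair_counts.update(frozenset(p) for p in combinations(sorted(types), 2))

n = len(lexicon)

def pmi(pair):
    a, b = tuple(pair)
    return log2((pair_counts[pair] / n) /
                ((type_counts[a] / n) * (type_counts[b] / n)))

for pair in pair_counts:
    print(sorted(pair), round(pmi(pair), 2))   # higher PMI -> more systematic alternation
```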
| 7
|
Lexical and Semantic Resources and Analysis
|
35_2014
| 2,014
|
Gloria Gagliardi
|
Rappresentazione dei concetti azionali attraverso prototipi e accordo nella categorizzazione dei verbi generali. Una validazione statistica
|
ITA
| 1
| 1
| 1
|
Università di Firenze
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Florence
|
The article presents the results of a study aimed at assessing the consistency of the categorization of the action space performed by native-speaker annotators for a semantically cohesive set of verbs from the IMAGACT database (the semantic area of girare). The statistical validation, articulated into three tests, is based on the computation of inter-tagger agreement in tasks of disambiguation of action concepts represented through image prototypes.
|
IMAGACT is an interlinguistic ontology that makes explicit the spectrum of pragmatic variation associated with medium- and high-frequency action predicates in Italian and English (Moneglia et al., 2014). The action classes that identify the reference entities of the linguistic concepts, represented in this lexical resource in the form of prototypical scenes (Rosch, 1978), were induced from spoken corpora by native-speaker linguists through a bottom-up procedure: the linguistic materials were subjected to an articulated annotation procedure described extensively in previous works (Moneglia et al., 2012; Frontini et al., 2012). The article illustrates the results of three tests aimed at assessing the consistency of the categorization of the action space proposed by the annotators for a small but semantically coherent set of verbs in the resource: this choice was determined by the intention of studying in detail the problems related to the typing of the variation of predicates over events. This case study also lays the ground for a standard procedure to be extended, at a later stage, to statistically significant portions of the ontology for its complete validation. Section 2 presents the statistical coefficients adopted, and Section 3 describes the methodology and the results of the tests carried out.
|
It is well known that semantic annotation tasks, and in particular those dedicated to the verbal lexicon (Fellbaum, 1998; Fellbaum et al., 2001), record low levels of inter-tagger agreement. In this case the possibility of obtaining high values, even with non-expert annotators, is likely due to the exclusively actional and physical nature of the classes used for the categorization. Following the validation it was possible to use the data in psycholinguistic applications (Gagliardi, 2014): the sample of verbs in the ontology, broad and at the same time formally controlled, could represent, if fully validated, an unprecedented source of semantic data for the cognitive sciences. For this purpose, as well as for a full educational and computational exploitation of the resource, in the near future the illustrated methodology will be extended to a quantitatively and statistically significant portion of the database.
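As a concrete reference for the agreement coefficients mentioned above, here is a minimal Cohen's kappa computation for two annotators over toy labels (the study's actual coefficients, annotators and data differ):

```python
# Minimal Cohen's kappa for a two-annotator categorization task (toy labels).
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["turn", "turn", "rotate", "turn", "rotate", "turn"]
ann2 = ["turn", "rotate", "rotate", "turn", "rotate", "turn"]
print(round(cohen_kappa(ann1, ann2), 2))   # chance-corrected agreement
```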
| 7
|
Lexical and Semantic Resources and Analysis
|
36_2014
| 2,014
|
Michel Généreux, Egon W. Stemle, Lionel Nicolas, Verena Lyding
|
Correcting OCR errors for German in Fraktur font
|
ENG
| 4
| 1
| 0
|
EURAC
| 1
| 0
| 0
| 0
|
0
| 0
|
Michel Généreux, Egon W. Stemle, Lionel Nicolas, Verena Lyding
|
Italy
|
Bolzano
|
In this paper, we present on-going experiments for correcting OCR errors on German newspapers in Fraktur font. Our approach borrows from techniques for spelling correction in context using a probabilistic edit-operation error model and lexical resources. We highlight conditions in which high error reduction rates can be obtained and where the approach currently stands with real data.
|
The OPATCH project (Open Platform for Access to and Analysis of Textual Documents of Cultural Heritage) aims at creating an advanced online search infrastructure for research in an historical newspaper archive. The search experience is enhanced by allowing for dedicated searches on person and place names as well as in defined subsections of the newspapers. For implementing this, OPATCH builds on computational linguistic (CL) methods for structural parsing, word class tagging and named entity recognition (Poesio et al., 2011). The newspaper archive contains ten newspapers in German language from the South Tyrolean region for the time period around the First World War. Dating between 1910 and 1920, the newspapers are typed in the blackletter Fraktur font and paper quality is degraded due to age. Unfortunately, such material is challenging for optical character recognition (OCR), the process of transcribing printed text into computer readable text, which is the first necessary pre-processing step for any further CL processing. Hence, in OPATCH we are starting from majorly error-prone OCR-ed text, in quantities that cannot realistically be corrected manually. In this paper we present attempts to automate the procedure for correcting faulty OCR-ed text.
|
The approach we presented to correct OCR errors considered four features of two types: edit distance and n-gram frequencies. Results show that a simple scoring system can correct OCR-ed texts with very high accuracy under idealized conditions: no more than two edit operations and a perfect dictionary. Obviously, these conditions do not always hold in practice, thus the observed error reduction rate drops to 10%. Nevertheless, we can expect to improve our dictionary coverage so that very noisy OCR-ed texts (i.e. 48% error with distance of at least three to target) can be corrected with accuracies up to 20%. OCR-ed texts with less challenging error patterns can be corrected with accuracies up to 61% (distance 2) and 86% (distance 1).
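A hedged sketch of the kind of scoring just described: candidates from a (toy) dictionary are ranked by combining Levenshtein distance to the OCR token with corpus frequency. The lexicon, weights and helper names are invented for illustration and do not reproduce the paper's four-feature model:

```python
# Candidate scoring for OCR correction: edit distance + unigram frequency.
def edit_distance(a, b):
    """Plain Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

FREQ = {"Haus": 120, "Hans": 30, "Hals": 15}     # toy unigram counts

def correct(token, max_dist=2):
    """Pick the candidate with the best combined distance/frequency score."""
    candidates = []
    for word, freq in FREQ.items():
        d = edit_distance(token, word)
        if d <= max_dist:
            candidates.append((d - 0.001 * freq, word))   # lower score = better
    return min(candidates)[1] if candidates else token

print(correct("Hauz"))    # -> "Haus"
```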
| 7
|
Lexical and Semantic Resources and Analysis
|
37_2014
| 2,014
|
Carlo Geraci, Alessandro Mazzei, Marco Angster
|
Some issues on Italian to LIS automatic translation. The case of train announcements
|
ENG
| 3
| 0
| 0
|
CNRS, Università di Torino, Libera Università di Bolzano
| 3
| 1
| 0
| 1
|
Carlo Geraci
| 0
|
0
|
France, Italy
|
Paris, Turin, Bolzano
|
In this paper we present some linguistic issues of an automatic translator from Italian to Italian Sign Language (LIS) and how we addressed them.
|
The computational linguistics community has shown a growing interest in sign languages. Several projects on automatic translation into signed languages (SLs) have recently started, and avatar technology is becoming more and more popular as a tool for implementing automatic translation into SLs (Bangham et al. 2000, Zhao et al. 2000, Huenerfauth 2006, Morrissey et al. 2007, Su and Wu 2009). Current projects investigate relatively small domains in which avatars may perform decently, like post office announcements (Cox et al., 2002), weather forecasting (Verlinden et al., 2002), the jurisprudence of prayer (Almasoud and Al-Khalifa, 2011), driver's license renewal (San-Segundo et al., 2012), and train announcements (e.g. Braffort et al. 2010, Ebling/Volk 2013). LIS4ALL is a project on automatic translation into LIS in which we face the domain of public transportation announcements. Specifically, we are developing a system for the automatic translation of train station announcements from spoken Italian into LIS. The project is the continuation of ATLAS, a project on automatic translation into LIS of weather forecasts (http://www.atlas.polito.it/index.php/en). In ATLAS two distinct approaches to automatic translation were adopted, interlingua rule-based translation and statistical translation (Mazzei et al. 2013, Tiotto et al., 2010, Hutchins and Somer 1992). Both approaches have advantages and drawbacks in the specific context of automatic translation into SL. The statistical approach provides greater robustness, while the symbolic approach is more precise in the final results. A preliminary evaluation of the systems developed for ATLAS showed that both approaches have similar results. However, the symbolic approach we implemented produces the structure of the sentence in the target language. This information is used for the automatic allocation of the signs in the signing space for LIS (Mazzei et al. 2013), an aspect not yet implemented in current statistical approaches. LIS4ALL only uses the symbolic (rule-based) translation architecture to process the Italian input and generate the final LIS string. With respect to ATLAS, two main innovations characterize this project: new linguistic issues are addressed, and the translation architecture is partially modified. As for the linguistic issues, we are enlarging the types of syntactic constructions covered by the avatar and we are increasing the electronic lexicon built for ATLAS (around 2350 signs) by adding new signs (around 120) specific to the railway domain. Indeed, the latter was one of the most challenging aspects of the project, especially when the domain of train stations is addressed. Prima facie this issue would look like a special case of proper names, something that should be easily addressed by generating specific signs (basically one for every station). However, the solution is not as simple as it seems. Indeed, several problematic aspects are hidden when looking at the linguistic situation of names in LIS (and more generally in SLs). As for the translation architecture, while in ATLAS a real interlingua translation with a deep parser and a FoL meaning representation were used, in LIS4ALL we decided to employ a regular-expression-based analyzer that produces a simple (non-recursive) filler/slot semantics to parse the Italian input.
This is so because, in the train announcement domain, input sentences contain a large number of complex noun phrases with several prepositional phrases, resulting in degraded parser performance (due to multiple attachment options). Moreover, the domain of application is extremely regular, since the announcements are generated from predefined templates (RFI, 2011). The rest of the paper is organized as follows: Section 2 discusses the linguistic issues, Section 3 discusses the technical issues, and Section 4 concludes the paper.
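To give a flavour of the filler/slot analysis described above, the sketch below parses one announcement with a single regular expression. The pattern, slot names and example sentence are hypothetical; the actual analyzer covers the full set of announcement templates.

```python
# Regular-expression filler/slot analysis of a (made-up) train announcement.
import re

PATTERN = re.compile(
    r"Il treno (?P<tipo>\w+) (?P<numero>\d+), proveniente da (?P<origine>[\w' ]+?) "
    r"e diretto a (?P<destinazione>[\w' ]+?), è in arrivo al binario (?P<binario>\d+)"
)

announcement = ("Il treno Regionale 10234, proveniente da Milano Centrale "
                "e diretto a Torino Porta Nuova, è in arrivo al binario 4")

match = PATTERN.search(announcement)
slots = match.groupdict() if match else {}
print(slots)
# {'tipo': 'Regionale', 'numero': '10234', 'origine': 'Milano Centrale',
#  'destinazione': 'Torino Porta Nuova', 'binario': '4'}
```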
|
In this paper we considered two issues related to the development of an automatic translator from Italian to LIS in the railway domain. These are: 1) some syntactic mismatches between the input and target languages; and 2) how to deal with lexical gaps due to unknown train station names. The first issue emerged in the creation of a parallel Italian-LIS corpus: the specificity of the domain allowed us to use a naive parser based on regular expressions, a semantic interpreter based on filler/slot semantics, and a small CCG for generation. The second issue has been addressed by blending written text into a special "sign". In the near future we plan to quantitatively evaluate our translator.
| 7
|
Lexical and Semantic Resources and Analysis
|
38_2014
| 2,014
|
Andrea Gobbi, Stefania Spina
|
ConParoleTue: crowdsourcing al servizio di un Dizionario delle Collocazioni Italiane per Apprendenti (Dici-A)
|
ITA
| 2
| 1
| 0
|
Università di Salerno, Università per Stranieri di Perugia
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Salerno, Perugia
|
ConParoleTue is a crowdsourcing experiment in L2 lexicography. Developed during the construction of a dictionary of collocations for learners of Italian as a second language, ConParoleTue represents an attempt to re-orient typical lexicographic issues (the quality and register of definitions) towards the communicative needs of learners. For this purpose, a crowdsourcing-based methodology is used for the drafting of definitions. This article describes this methodology and presents a first assessment of its results: the definitions obtained through crowdsourcing are quantitatively relevant and qualitatively suitable for non-native speakers of Italian.
|
ConParoleTue (2012) is an experiment in the application of crowdsourcing to L2 lexicography, developed within the APRIL project (Spina, 2010b) of the University for Foreigners of Perugia during the creation of a dictionary of collocations for learners of Italian as L2. Collocations have occupied a central place for several decades in studies on second language learning (Meunier and Granger, 2008). Collocational competence is recognized as a key competence for learners, because it plays a crucial role in both production (for example, it provides pre-built, ready-to-use lexical blocks that improve fluency; Schmitt, 2004) and comprehension (Lewis, 2000). Within Italian lexicography, too, research on collocations has been productive and has led, in the last five years, to the publication of at least three paper dictionaries of Italian collocations: Urzì (2009), born in the field of translation, Tiberii (2012) and Lo Cascio (2013). The DICI-A (Dizionario delle Collocazioni Italiane per Apprendenti; Spina, 2010a; 2010b) consists of the 11,400 Italian collocations extracted from the Perugia Corpus, a reference corpus of contemporary written and spoken Italian. Among the many proposals, the definition underlying the DICI-A is that of Evert (2005), according to which a collocation is "a word combination whose semantic and/or syntactic properties cannot be fully predicted from those of its components, and which therefore has to be listed in a lexicon". The collocations of the DICI-A belong to 9 different categories, selected on the basis of the most productive sequences of the grammatical categories that compose them: adjective-noun (tragic error), noun-adjective (the next year), noun-noun (weight form), verb-(article)-noun (to make a request / to make a penalty), noun-preposition-noun (credit card), adjective-as-noun (fresh like a rose), adjective-conjunction-adjective (healthy and safe), noun-conjunction-noun (card and pen), verb-adjective (cost expensive). For each collocation, Juilland's index of dispersion and usage (Bortolini et al., 1971) was calculated and used to select the final list of collocations. The remaining question is how to define them. In this context the idea of using crowdsourcing, from which ConParoleTue was born, arose.
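A small sketch of Juilland's dispersion (D) and usage (U) coefficients mentioned above, computed from toy sub-frequencies of a collocation across corpus sections (the DICI-A computation over the Perugia Corpus is of course richer):

```python
# Juilland's D (dispersion) and U (usage) coefficients from sub-frequencies.
from math import sqrt
from statistics import mean, pstdev

def juilland(subfreqs):
    """subfreqs: occurrences of the collocation in each corpus section."""
    n = len(subfreqs)
    m = mean(subfreqs)
    if m == 0:
        return 0.0, 0.0
    v = pstdev(subfreqs) / m                 # coefficient of variation
    d = 1 - v / sqrt(n - 1)                  # dispersion: 1 = perfectly even
    u = d * sum(subfreqs)                    # usage coefficient
    return d, u

print(juilland([12, 9, 11, 10, 8]))          # evenly spread -> D close to 1
print(juilland([50, 0, 0, 0, 0]))            # concentrated  -> D = 0
```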
|
The experiment described here, which concerns the use of crowdsourcing for the acquisition of definitions of Italian collocations, has proved effective both from the quantitative point of view (more than 3,200 definitions in five months) and from that of their appropriateness for an audience of learners. The comparison with definitions written by a team of lexicographers has highlighted the more intuitive and natural character of the definitions produced by non-specialists, as opposed to the greater abstractness of the definitions of professionals. These results encourage us to continue the compilation of the dictionary through this crowdsourcing-based methodology.
| 8
|
Learner Corpora and Language Acquisition
|
39_2014
| 2,014
|
Iryna Haponchyk, Alessandro Moschitti
|
Making Latent SVMstruct Practical for Coreference Resolution
|
ENG
| 2
| 1
| 1
|
Università di Trento, Qatar Computing Research Institute
| 2
| 1
| 0
| 1
|
Alessandro Moschitti
| 0
|
0
|
Italy, Qatar
|
Trento, Rome, Ar-Rayyan
|
The recent work on coreference resolution has shown a renewed interest in the structured perceptron model, which seems to achieve the state of the art in this field. Interestingly, while SVMs are known to generally provide higher accuracy than a perceptron, according to previous work and theoretical findings, no recent paper currently describes the use of SVMstruct for coreference resolution. In this paper, we address this question by solving some technical problems at both theoretical and algorithmic level, enabling the use of SVMs for coreference resolution and other similar structured output tasks (e.g., based on clustering).
|
Coreference resolution (CR) is a complex task, in which document phrases (mentions) are partitioned into equivalence sets. It has recently been approached by applying learning algorithms operating in structured output spaces (Tsochantaridis et al., 2004). Considering the nature of the problem, i.e., the NP-hardness of finding optimal mention clusters, the task has been reformulated as a spanning graph problem. First, Yu and Joachims (2009) proposed to (i) represent all possible mention clusters with fully connected undirected graphs and (ii) infer document mention cluster sets by applying Kruskal’s spanning algorithm (Kruskal, 1956). Since the same clustering can be obtained from multiple spanning forests (there is no one-to-one correspondence), the latter are treated as hidden or latent variables. Therefore, an extension of the structural SVM – Latent SVMstruct (LSVM) – was designed to include these structures in the learning procedure. Later, Fernandes et al. (2012) presented their CR system, which has a resembling architecture. They do inference on a directed candidate graph using the algorithm of Edmonds (1967). This modeling, coupled with the latent structured perceptron, delivered state-of-the-art results in the CoNLL-2012 Shared Task (Pradhan et al., 2012). To the best of our knowledge, there is no previous work on a comparison of the two methods, and the LSVM approach of Yu and Joachims has not been applied to the CoNLL data. In our work, we aim, firstly, at evaluating LSVM with respect to the recent benchmark standards (corpus and evaluation metrics defined by the CoNLL shared task) and, secondly, at understanding the differences and advantages of the two structured learning models. In a closer look at the LSVM implementation, we found out that it is restricted to inference on a fully-connected graph. Thus, we provide an extension of the algorithm enabling it to operate on an arbitrary graph: this is very important as all the best CR models exploit heuristics to prefilter the edges of the CR graph. Therefore, our modification of LSVM allows us to use it with powerful heuristics, which greatly contribute to the achievement of the state of the art. Regarding the comparison with the latent perceptron of Fernandes et al. (2012), the results of our experiments provide evidence that the latent trees derived by Edmonds’ spanning tree algorithm better capture the nature of CR. Therefore, we speculate that the use of this spanning tree algorithm within LSVM may produce higher results than those of the current perceptron algorithm.
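To make the clustering-as-spanning-forest idea concrete, here is a minimal sketch of Kruskal-style inference over scored mention pairs (union-find over positive-scoring edges); it illustrates the general technique described above, not the actual LSVM implementation, and the scores and mention indices are hypothetical.

```python
def kruskal_coref_clusters(num_mentions, scored_edges):
    """Greedy Kruskal-style inference: keep the highest-scoring positive edges
    that do not create cycles; the resulting connected components are the
    clusters. `scored_edges` is a list of (score, i, j) mention pairs."""
    parent = list(range(num_mentions))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    forest = []
    for score, i, j in sorted(scored_edges, reverse=True):
        if score <= 0:                      # only positive edges join mentions
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))

    clusters = {}
    for m in range(num_mentions):
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values()), forest

# Toy example: 4 mentions, pairwise scores from some classifier
edges = [(2.3, 0, 1), (-0.5, 0, 2), (1.1, 2, 3), (-1.0, 1, 3)]
print(kruskal_coref_clusters(4, edges))   # ([[0, 1], [2, 3]], [(0, 1), (2, 3)])
```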
|
We have performed a comparative analysis of the structured prediction frameworks for coreference resolution. Our experiments reveal that the graph modelling of Fernandes et al. and Edmonds’ spanning algorithm seem better suited to the task. As a short-term future work, we intend to verify if LSVM benefits from using Edmonds’ algorithm. We have also enabled the LSVM implementation to operate on partial graphs, which allows the framework to be combined with different filtering strategies and facilitates its comparison with other systems.
| 7
|
Lexical and Semantic Resources and Analysis
|
40_2014
| 2,014
|
Manuela Hürlimann, Raffaella Bernardi, Denis Paperno
|
Nominal Coercion in Space: Mass/Count Nouns and Distributional Semantics
|
ENG
| 3
| 2
| 1
|
Università di Trento
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
Theoretical linguists analyse all nouns as either mass or count, but admit that noun meanings can be shifted from one class to the other and classify these shifts. We use distributional semantic models to check how the theoretical analysis of mass-count meaning shifts relates to the actual usage of the nouns.
|
It is generally assumed that if a mass (count) noun is used in a count (resp. mass) context, its meaning changes. Compare example (1), where wine is used in a mass context (as a bare singular; denoting a substance) to (2), where the use of the determiner three indicates a count usage, shifting its interpretation to types of wine. (1) I like wine. (2) Three wines grow in this region. The same phenomenon can also be observed for count nouns: in example (3), apple is used in its more frequent count sense, while its bare usage in example (4) constitutes a mass usage with a slightly changed meaning — the focus is not on individual, whole apples as in the countable example, but on their material/substance. (3) I bought five apples at the market. (4) There is apple in the salad. Data-based approaches to the mass/count phenomenon include Baldwin and Bond (2003), who classify nouns into five countability types based on lexico-syntactic features, and Ryo Nagata et al. (2005), who use context words to distinguish between mass and count nouns. Katz and Zamparelli (2012) were the first to study mass/count elasticity using distributional semantic models. First of all, they dispelled the view that there is a clear count/mass dichotomy: like in the examples above, many nouns which appear frequently in count contexts also appear frequently in mass contexts. Hence, rather than making a binary distinction (count vs. mass nouns), we should speak of predominantly count (resp., predominantly mass) nouns, i.e., nouns which occur more frequently in count (resp. mass) contexts than in mass (resp., count) contexts. Moreover, Katz and Zamparelli (2012) take pluralisation as a proxy for count usage and conjecture that for predominantly count nouns the similarity between singular and plural is higher than for predominantly mass nouns, since the latter undergo a shift whereas the former do not. This conjecture finds quantitative support in their data – the 2-billion-word ukWaC corpus. We wonder whether other factors, such as polysemy, have an impact on this quantitative analysis, and we further investigate nominal coercion by also considering the abstract vs. concrete dimension and polysemy. Katz and Zamparelli (2012) notice that while plurals are invariably count, singulars can be a mixture of mass and count usages, and propose to use syntactic contexts to disambiguate mass and count usages in future studies. We take up their suggestion and look at coercion using vector representations of mass vs. count usages. According to the linguistic literature (Pelletier, 1975), instances of coercion fall into several shift classes. In this view, coerced nouns move towards a particular “destination”: Container shift: liquids (mass) are coerced into countable quantities contained in containers: “two beers, please!” Kind shift: masses are coerced into a kind reading: “three wines grow in this region” Food shift: animal nouns are coerced into a mass food meaning: “there was chicken in the salad” Universal grinder: countables are coerced into a mass reading: “after the accident, there was dog all over the street” We wonder whether these shift classes can be identified in the semantic space. Thus, we propose a simple experiment in which we assess whether the count usage vectors of typical mass nouns move towards (=become more similar to) these suggested destinations. In sum, we address the following research questions: (1) Do nouns undergo noticeable shifts – and if so, what factors have an impact?
(2) Can we interpret the destination of a shift in terms of standard shift classes?
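As an illustration of the kind of measurement used to test the singular/plural conjecture, the sketch below computes the cosine distance between singular and plural vectors; the vectors are hypothetical stand-ins for those produced by a real distributional model, not the ukWaC-based vectors used in the cited studies.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sg_pl_distance(vectors, singular, plural):
    """1 - cosine similarity between the singular and plural usage vectors."""
    return 1.0 - cosine(vectors[singular], vectors[plural])

# Hypothetical vectors standing in for a distributional semantic model
vectors = {
    "wine":   np.array([0.8, 0.1, 0.3]),
    "wines":  np.array([0.2, 0.7, 0.4]),
    "apple":  np.array([0.6, 0.2, 0.5]),
    "apples": np.array([0.5, 0.3, 0.5]),
}
# Expectation under the conjecture: mass nouns shift more than count nouns
print(sg_pl_distance(vectors, "wine", "wines"),
      sg_pl_distance(vectors, "apple", "apples"))
```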
|
We have seen how Distributional Semantics Models (DSMs) can be applied to investigate nominal coercion. DSMs can capture some aspects of mass/count noun meaning shifts, such as the fact that predominantly mass nouns undergo greater meaning shifts than predominantly count nouns when pluralised. We also find that abstractness and polysemy have an impact on singular-plural distance: abstract nouns and highly polysemous nouns have a greater singular-plural distance than concrete and monosemous nouns, respectively. Furthermore, our second experiment shows that coercion lies mainly in cases where both the mass and count usage vectors stay close to the averaged noun meaning. However, as our toy evaluation of clear cases of container and kind coercion shows, the direction of the shift can be differentiated based on usage vectors.
| 7
|
Lexical and Semantic Resources and Analysis
|
41_2014
| 2,014
|
Claudio Iacobini, Aurelio De Rosa, Giovanna Schirato
|
Part-of-Speech tagging strategy for MIDIA: a diachronic corpus of the Italian language
|
ENG
| 3
| 1
| 0
|
Università di Salerno
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Salerno
|
The realization of MIDIA (a balanced diachronic corpus of written Italian texts ranging from the XIII to the first half of the XX century) has raised the issue of developing a strategy for PoS tagging able to properly analyze texts from different textual genres belonging to a broad span of the history of the Italian language. The paper briefly describes the MIDIA corpus; it then focuses on the improvements to the contemporary Italian parameter file of the PoS tagging program Tree Tagger, made to adapt the software to the analysis of a textual basis characterized by strong morpho-syntactic and lexical variation; and, finally, it outlines the reasons for and the advantages of the strategies adopted.
|
The realization of MIDIA, a balanced diachronic corpus of Italian, raised the issue of elaborating a strategy for the analysis of texts from different genres and time periods in the history of Italian. This temporal and textual diversity involves both a marked graphic, morphological and lexical variation in word forms, and differences in the ordering of the PoS. The program chosen for the PoS tagging is Tree Tagger (cf. Schmid 1994, 1995), and the parameter file, made of a lexicon and a training corpus, is the one developed by Baroni et al. (2004) for contemporary Italian. The strategy we developed for the adjustment of the PoS tagging to different diachronic varieties has been to greatly increase the lexicon with a large amount of word forms belonging predominantly to Old Italian, rather than to retrain the program on texts belonging to earlier temporal stages. This solution turned out to be economical and effective: it has allowed a significant improvement in the correct assignment of PoS for both old and modern texts, with a success rate equal to or greater than 95% for the tested texts, and an optimal use of human resources.
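A minimal sketch of the lexicon-enrichment step described above, assuming a simple tab-separated word-form/analysis format for the Tree Tagger lexicon; the file names and the exact column layout are assumptions made for illustration, not the actual MIDIA resources.

```python
def merge_lexicons(base_path, extra_path, out_path):
    """Merge an Old Italian word-form list into a base tagger lexicon.
    Assumed line format: wordform<TAB>tag lemma[<TAB>tag lemma ...].
    Entries already present in the base lexicon are kept unchanged."""
    def load(path):
        lex = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip("\n").split("\t")
                if parts and parts[0]:
                    lex.setdefault(parts[0], parts[1:])
        return lex

    base, extra = load(base_path), load(extra_path)
    for form, analyses in extra.items():
        base.setdefault(form, analyses)   # add only forms unknown to the base lexicon
    with open(out_path, "w", encoding="utf-8") as out:
        for form in sorted(base):
            out.write("\t".join([form] + base[form]) + "\n")

# Hypothetical file names:
# merge_lexicons("italian-lexicon.txt", "old-italian-forms.txt", "midia-lexicon.txt")
```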
|
The strategy we devised to develop MIDIA PoS tagging for the analysis of texts belonging to different time periods and textual genres than that for which it was originally trained has proved to be successful and economical. Human resources have been concentrated on enriching the lexicon and on the review of automatic lexeme and PoS assignment. Our results show that a larger lexicon improves the analysis also for words adjacent to those recognized by the matching with the word forms listed in the lexicon. This has some interesting consequences both on the strategies for text tagging and on the implementation of the program Tree Tagger for the analysis of texts with a great range of variation. We plan to further enrich MIDIA lexicon by adding word forms from the corpus not yet listed in the lexicon.
| 7
|
Lexical and Semantic Resources and Analysis
|
42_2014
| 2,014
|
Elisabetta Jezek, Laure Vieu
|
Distributional analysis of copredication: Towards distinguishing systematic polysemy from coercion
|
ENG
| 2
| 2
| 1
|
Università di Pavia, CNRS, Université Toulouse III
| 2
| 1
| 0
| 1
|
Laure Vieu
| 0
|
0
|
France, Italy
|
Toulouse, Pavia
|
In this paper we argue that the account of the notion of complex type based on copredication tests is problematic, because copredication is possible, albeit less frequent, also with expressions which exhibit polysemy due to coercion. We show through a distributional and lexico-syntactic pattern-based corpus analysis that the variability of copredication contexts is the key to distinguishing complex type nouns from nouns subject to coercion.
|
Copredication can be defined as a “grammatical construction in which two predicates jointly apply to the same argument” (Asher 2011, 11). We focus here on copredications in which the two predicates select for incompatible types. An example is (1): (1) Lunch was delicious but took forever. where one predicate (‘take forever’) selects for the event sense of the argument lunch while the other (‘delicious’) selects for the food sense. Polysemous expressions entering such copredication contexts are generally assumed to have a complex type (Pustejovsky 1995), that is, to lexically refer to entities “made up” of two (or more) components of a single type; it is thus assumed for example that lunch is of the complex type event•food. Copredication as a defining criterion for linguistic expressions referring to complex types is, however, problematic, because copredication is possible, albeit less frequent, also with expressions which exhibit polysemy because of coercion, as in the case of the noun sandwich in contexts such as (2): (2) Sam grabbed and finished the sandwich in one minute. where the predicate grab selects for the simple type the noun sandwich is associated with (food), whereas finish coerces it to an event. The claim that the event sense exhibited by sandwich is coerced is supported by the low variability of event contexts in which sandwich appears (as opposed to lunch); see for example “during lunch” (780 hits for the Italian equivalent in our reference corpus, cf. section 3) vs. “*during the sandwich” (0 hits). Our goal is therefore twofold: to evaluate whether at the empirical level it is possible to distinguish, among nouns appearing in copredication contexts, between complex types and simple (or complex) types subject to coercion effects; and to propose a method to extract complex type nouns from corpora, combining distributional and lexico-syntactic pattern-based analyses. Our working hypothesis is that lexicalized complex types appear in copredication patterns more systematically, so that high variability of pairs of predicates in copredication contexts is evidence of complex type nouns, while low variability points to simple (or complex) type nouns subject to coercion effects. In the sections that follow, we will first raise the questions of what counts as a copredication and what copredication really tells us about the underlying semantics of the nouns that support it. Then, we will introduce the experiments we conducted so far to verify our hypothesis. Finally, we will draw some conclusions and point at the experiments we have planned as future work.
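To illustrate the working hypothesis, the sketch below counts the distinct predicate pairs attested in copredication contexts for each noun; the extracted triples are invented for the example and the function does not reproduce the actual extraction pipeline used in the paper.

```python
from collections import defaultdict

def copredication_variability(triples):
    """Count the distinct (predicate1, predicate2) pairs attested in
    copredication contexts for each noun; higher variability is taken as
    evidence of a lexicalized complex type, lower variability of coercion.
    `triples` is a list of (noun, pred1, pred2) tuples extracted from a corpus."""
    pairs = defaultdict(set)
    for noun, p1, p2 in triples:
        pairs[noun].add(tuple(sorted((p1, p2))))
    return {noun: len(ps) for noun, ps in pairs.items()}

# Hypothetical extracted copredication contexts
data = [
    ("lunch", "delicious", "take_forever"),
    ("lunch", "eat", "last"),
    ("lunch", "spicy", "start"),
    ("sandwich", "grab", "finish"),
    ("sandwich", "grab", "finish"),
]
print(copredication_variability(data))   # {'lunch': 3, 'sandwich': 1}
```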
|
We can therefore conclude that an experimental method to separate nouns of complex types from nouns subject to coercion appears possible. The proposed method constitutes the first attempt at semi-automatically extracting complex type nouns from corpora, something that has remained elusive up to now. In addition, we learned that letter should be preferred over book as prototype of the complex type Info•Phys. In fact, this complex type is not the most straightforward, since the dependence between the components of a dot object is not one-to-one. The case of Event•Food with lunch as prototype, in which there is such a tight symmetric dependence and no competition with separate simple senses, might prove easier to deal with. This will be tackled in a next experiment. The predicate selection is a critical phase in the proposed method. It is difficult if not impossible to avoid polysemy and metaphorical uses, especially since the relevant copredications are sparse and we cannot rely only on highly specialized infrequent predicates. In future work, we plan to experiment with fully automatic selection, exploiting distributional semantics methods. Dimension reduction through non-negative matrix factorization yields a possible interpretation of the dimensions in terms of “topics”, which is confirmed by experiments (Van de Cruys et al. 2011). Building on this, we shall check whether “topics” for predicates correspond to selectional restrictions suitable to build our copredication patterns.
| 7
|
Lexical and Semantic Resources and Analysis
|
43_2014
| 2,014
|
Fahad Khan, Francesca Frontini
|
Publishing PAROLE SIMPLE CLIPS as Linguistic Linked Open Data
|
ENG
| 2
| 1
| 0
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
This paper presents the ongoing project for the conversion and publication of the Italian lexicon Parole Simple Clips as linked open data, illustrating the chosen model, with a particular focus on the translation of the syntactic and semantic information pertaining to verbs and their predicates.
|
The aim of the present paper is to describe the ongoing conversion of the semantic layer of the Parole Simple Clips (PSC) lexical resource into linked open data. We have previously presented the conversion of the nouns in PSC in (Del Gratta et al., 2013). In this paper we will continue this work by presenting the model we intend to use for converting the verbs. In the next section we shall give a general background on the linguistic linked open data (LLOD) cloud and discuss the importance of putting lexical resources on the cloud. We also discuss the lemon model which we have chosen as the basis of the conversion of the PSC resource. In the following section we discuss PSC itself and give a brief overview of its structure. Finally, in the last section we will outline how we intend to proceed with the conversion of the PSC verbs, illustrating the proposed schema with an example.
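As an illustration of what a lemon-based conversion can look like, here is a minimal sketch that builds one lexical entry with rdflib; the base URI, the chosen entry and the link to a SIMPLE-style semantic unit are hypothetical, and the actual PSC conversion schema is the one described in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF

LEMON = Namespace("http://lemon-model.net/lemon#")
PSC = Namespace("http://example.org/psc/")   # hypothetical base URI for the resource

g = Graph()
g.bind("lemon", LEMON)

entry = PSC["mangiare_verb"]
form = PSC["mangiare_form"]
sense = PSC["mangiare_sense1"]

g.add((entry, RDF.type, LEMON.LexicalEntry))
g.add((entry, LEMON.canonicalForm, form))
g.add((form, LEMON.writtenRep, Literal("mangiare", lang="it")))
g.add((entry, LEMON.sense, sense))
g.add((sense, LEMON.reference, PSC["USem_eat"]))   # link to a semantic unit

print(g.serialize(format="turtle"))
```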
|
In this paper we have presented our model for representing the PSC verbs using the lemon model. As we have stated above, this is currently work in progress. In the final paper the link to the public dataset will be provided.
| 7
|
Lexical and Semantic Resources and Analysis
|
44_2014
| 2,014
|
Alberto Lavelli
|
A Preliminary Comparison of State-of-the-art Dependency Parsers on the Italian Stanford Dependency Treebank
|
ENG
| 1
| 0
| 0
|
Fondazione Bruno Kessler
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
This paper reports the efforts involved in applying several state-of-the-art dependency parsers on the Italian Stanford Dependency Treebank (ISDT). The aim of such efforts is twofold: first, to compare the performance and choose the parser to participate in the EVALITA 2014 task on dependency parsing; second, to investigate how simple it is to apply freely available state-of-the-art dependency parsers to a new language/treebank.
|
Recently, there has been an increasing interest in dependency parsing, witnessed by the organisation of a number of shared tasks, e.g. Buchholz and Marsi (2006), Nivre et al. (2007). Concerning Italian, there have been tasks on dependency parsing in all the editions of the EVALITA evaluation campaign (Bosco et al., 2008; Bosco et al., 2009; Bosco and Mazzei, 2011; Bosco et al., 2014). In the 2014 edition, the task on dependency parsing exploits the Italian Stanford Dependency Treebank (ISDT), a new treebank featuring an annotation based on Stanford Dependencies (de Marneffe and Manning, 2008). This paper reports the efforts involved in applying several state-of-the-art dependency parsers on ISDT. There are at least two motivations for such efforts. First, to compare the results and choose the parsers to participate in the EVALITA 2014 task on dependency parsing. Second, to investigate how simple it is to apply freely available state-of-the-art dependency parsers to a new language/treebank, following the instructions available together with the code and possibly having a few interactions with the developers. As in many other NLP fields, there are very few comparative articles in which the performance of different parsers is compared. Most of the papers simply present the results of a newly proposed approach and compare them with the results reported in previous articles. In other cases, the papers are devoted to the application of the same tool to different languages/treebanks. It is important to stress that the comparison concerns tools used more or less out of the box and that the results cannot be used to compare specific characteristics such as parsing algorithms, learning systems, etc.
|
In the paper we have reported on work in progress on the comparison between several state-of-the-art dependency parsers on the Italian Stanford Dependency Treebank (ISDT). In the near future, we plan to widen the scope of the comparison by including more parsers. Finally, we will perform an analysis of the results obtained by the different parsers, considering not only their performance but also their behaviour in terms of speed, CPU load at training and parsing time, ease of use, licence agreement, etc.
| 4
|
Syntax and Dependency Treebanks
|
45_2014
| 2,014
|
Alessandro Lenci, Gianluca E. Lebani, Sara Castagnoli, Francesca Masini, Malvina Nissim
|
SYMPAThy: Towards a comprehensive approach to the extraction of Italian Word Combinations
|
ENG
| 5
| 3
| 0
|
Università di Pisa, Università di Bologna
| 3
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa, Bologna
|
The paper presents SYMPAThy, a new approach to the extraction of Word Combinations. The approach is new in that it combines pattern-based (P-based) and syntax-based (S-based) methods in order to obtain an integrated and unified view of a lexeme’s combinatory potential.
|
The term Word Combinations (WOCs), as used here, broadly refers to the range of combinatory possibilities typically associated with a word. On the one hand, it comprises so-called Multi-word Expressions (MWEs), intended as a variety of recurrent word combinations that act as a single unit at some level of linguistic analysis (Calzolari et al., 2002; Sag et al., 2002; Gries, 2008): they include phrasal lexemes, idioms, collocations, etc. On the other hand, WOCs also include the preferred distributional interactions of a word (be it a verb, a noun or an adjective) with other lexical entries at a more abstract level, namely that of argument structure patterns, subcategorization frames, and selectional preferences. Therefore, WOCs include both the normal combinations of a word and their idiosyncratic exploitations (Hanks, 2013). The full combinatory potential of a lexical entry can therefore be defined and observed at the level of syntactic dependencies and at the more constrained surface level. In both theory and practice, though, these two levels are often kept separate. Theoretically, argument structure is often perceived as a “regular” syntactic affair, whereas MWEs are characterised by “surprising properties not predicted by their component words” (Baldwin and Kim, 2010, 267). At the practical level, in order to detect potentially different aspects of the combinatorics of a lexeme, different extraction methods are used – i.e. either a surface, pattern-based (P-based) method or a deeper, syntax-based (S-based) method – as their performance varies according to the different types of WOCs/MWEs (Sag et al., 2002; Evert and Krenn, 2005). We argue that, in order to obtain a comprehensive picture of the combinatorial potential of a word and enhance extraction efficacy for WOCs, the P-based and S-based approaches should be combined. Thus, we extracted corpus data into a database where both P-based and S-based information is stored together and accessible at the same time. In this contribution we show its advantages. This methodology has been developed on Italian data, within the CombiNet project, aimed at building an online resource for Italian WOCs.
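A minimal sketch of what a record combining P-based and S-based information might look like; the field names and the example occurrence are illustrative and do not reproduce the actual SYMPAThy database schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class WOCOccurrence:
    """One corpus occurrence of a target lemma, storing both the surface
    (P-based) pattern and the dependency (S-based) analysis side by side."""
    target_lemma: str
    sentence_id: str
    surface_pattern: List[str]                 # P-based: token/POS window around the target
    dependencies: List[Tuple[str, str, str]]   # S-based: (relation, head_lemma, dep_lemma)

occ = WOCOccurrence(
    target_lemma="prendere",
    sentence_id="itwac_001",
    surface_pattern=["prendere", "DET:def", "decisione"],
    dependencies=[("obj", "prendere", "decisione")],
)
# Both views can now be queried jointly, e.g. all surface patterns whose
# S-based analysis contains an "obj" relation with "decisione".
print(occ)
```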
|
In this paper we presented SYMPAThy, a new method for the extraction of WOCs that exploits a variety of information typical of both P-based and S-based approaches. Although SYMPAThy was developed on Italian data, it can be adapted to other languages. In the future, we intend to exploit this combinatory base to model the gradient of schematicity/productivity and fixedness of combinations, in order to develop a “WOC-hood” indicator to classify the different types of WOCs on the basis of their distributional behavior.
| 7
|
Lexical and Semantic Resources and Analysis
|
46_2014
| 2,014
|
Eleonora Lisi, Emanuele Donati, Fabio Massimo Zanzotto
|
Più l'ascolto e più Mi piace! Social media e Radio: uno studio preliminare del successo dei post
|
ITA
| 3
| 1
| 1
|
Università di Roma Tor Vergata, Radio Dimensione Suono
| 3
| 0
| 0
| 0
|
0
| 1
|
Emanuele Donati
|
Italy
|
Rome
|
Radio is at a point of no return and is fighting a strange battle with the new mass media. In this article we analyze how radio can take advantage of social networks. In particular, we try to identify a useful strategy for radio speakers to propose successful posts on platforms such as Facebook. We therefore analyze, stylistically and linguistically, a corpus of posts written by the speakers of a radio station, in order to correlate these characteristics with the success of the posts themselves in terms of views and likes.
|
Radio was introduced in Italy as a mass communication medium in 1924 and dominated the scene until television made its first appearance in 1954. In fact, radio has always had to contend with media generated by an evolving technology. With television, whose subscribers grew very quickly (Istat data; Ortoleva & Scaramucci, 2003), radio eventually found an accommodation. Important technological innovations differentiated radio from television in the 1950s and 1960s. FM allowed a relatively low-cost multiplication of broadcasting stations, making it possible to at least partially overcome the “generalist” model of television in favour of a wider and more varied offer of targeted programmes; the transistor allowed radio to conquer spaces outside the domestic environment, while the car radio made it possible to follow listeners even in their daily movements. In parallel with these new technologies, a new youth culture developed around radio, animated by the rolling rhythm of rock’n’roll and fuelled by the charm of record playing (Monteleone, 2011). Within fifteen to twenty years of the first advances of television, radio had diversified its offer in terms of content and schedules. Radio filled the time slots left free by TV (the morning hours and much of the afternoon). Personal, mobile, relatively distracted listening became the background accompaniment to other daily activities, ending up defining a characteristic trait of the relationship with the audience, which became more intimate and deep (Menduni, 2003). This new dimension of listening soon revealed the possibility of exploiting an old but great resource which, almost paradoxically, would give radio the face of an innovative medium: the telephone cable, a new channel through which to minimize the distance from the audience and start the age of interaction. On 7 January 1969, at 10:40, the first episode of the radio programme “Chiamate Roma 3131” went on air: there was interaction with the public. In these years, radio is forced to fight against a new enemy that could instead become its ally: the Web, in its new incarnation as social networks. Just as in the 1950s and 1960s radio reinvented itself by starting to interact with its audience, in these years radio could exploit social networks to reinvent itself again. In this article we analyze how radio can take advantage of social networks. In particular, we try to propose a strategy for radio speakers to write successful posts on platforms such as Facebook. We therefore analyze, stylistically and linguistically, a corpus of posts written by radio speakers, in order to correlate these features with the success of the posts themselves in terms of views and likes. From this, we try to derive some guidelines for writing successful posts. The rest of the article is organized as follows: Section 2 describes the method of stylistic, linguistic and content analysis of the posts; Section 3 analyses the results on a set of radio speakers’ posts.
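As an illustration of the kind of correlation analysis described in Section 2, the sketch below relates two hypothetical post-level features to like counts using a Pearson correlation; the features and numbers are invented and are not the data analysed in the paper.

```python
from scipy.stats import pearsonr

# Hypothetical per-post features and success counts (likes); the real study
# correlates stylistic and linguistic features with views and likes.
post_length = [12, 45, 30, 8, 60, 25]    # e.g. number of tokens per post
exclamations = [1, 0, 2, 3, 0, 1]        # e.g. count of "!" per post
likes = [150, 40, 90, 200, 25, 110]

for name, feature in [("length", post_length), ("exclamations", exclamations)]:
    r, p = pearsonr(feature, likes)
    print(f"{name}: r={r:.2f}, p={p:.3f}")
```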
|
The results and observations obtained from this initial study tend to partly confirm the actual impact of some parameters on the success of a post. In several cases, however, the data obtained outline new and unexpected scenarios. It is precisely results like these that lead us to reflect on the substantial differences between the online and the on-air dimensions of radio, such as the absence (in the former) of schedules and of the boundaries tied to time slots and to the routine of daily appointments, and the particular flow mechanism, generated above all by chance and unpredictability. The study presented here can be the basis for building a predictive system capable of forecasting whether a post will be successful, similar to those used to predict film ratings (Pang, Lee, & Vaithyanathan, 2002).
| 6
|
Sentiment, Emotion, Irony, Hate
|
47_2014
| 2,014
|
Simone Magnolini, Bernardo Magnini
|
Estimating Lexical Resources Impact in Text-to-Text Inference Tasks
|
ENG
| 2
| 0
| 0
|
Università di Brescia, Fondazione Bruno Kessler
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Brescia, Trento
|
This paper provides an empirical analysis of both the datasets and the lexical resources that are commonly used in text-to-text inference tasks (e.g. textual entailment, semantic similarity). According to the analysis, we define an index for the impact of a lexical resource, and we show that such index significantly correlates with the performance of a textual entailment system.
|
In the last decade text-to-text semantic inference has been a relevant topic in Computational Linguistics. Driven by the assumption that language understanding crucially depends on the ability to recognize semantic relations among portions of text, several text-to-text inference tasks have been proposed, including recognizing paraphrasing (Dolan and Brockett, 2005), recognizing textual entailment (RTE) (Dagan et al., 2005), and semantic similarity (Agirre et al., 2012). A common characteristic of such tasks is that the input are two portions of text, let’s call them Text1 and Text2, and the output is a semantic relation between the two texts, possibly with a degree of confidence of the system. For instance, given the following text fragments: Text1: George Clooney’s longest relationship ever might have been with a pig. The actor owned Max, a 300-pound pig. Text2: Max is an animal. a system should be able to recognize that there is an ”entailment” relation among Text1 and Text2. While the task is very complex, requiring in principle to consider syntax, semantics and also pragmatics, current systems adopt rather simplified techniques, based on available linguistic resources. For instance, many RTE systems (Dagan et al., 2012) would attempt to take advantage of the fact that, according to WordNet, the word animal in Text2 is a hypernym of the word pig in Text1. A relevant aspect in text-to-text tasks is that datasets are usually composed of textual pairs for positive cases, where a certain relation occurs, and negative pairs, where a semantic relation doesn’t appear. For instance, the following pair: Text1: John has a cat, named Felix, in his farm, it’s a Maine Coon, it’s the largest domesticated breed of cat. Text2: Felix is the largest domesticated animal in John’s farm. shows a case of ”non-entailment”. In the paper we systematically investigate the relations between the distribution of lexical associations in textual entailment datasets and the system performance. As a result we define a ”resource impact index” for a certain lexical resource with respect to a certain dataset, which indicates the capacity of the resource to discriminate between positive and negative pairs. We show that the ”resource impact index” is homogeneous across several datasets and tasks, and that it correlates with the performance of the algorithm we chose in our experiments.
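The sketch below only illustrates the intuition behind a discriminative resource index (coverage on positive pairs minus coverage on negative pairs); the actual definition of the resource impact index is the one given in the paper, and the toy lookup stands in for a real resource such as WordNet.

```python
def resource_coverage(pairs, related):
    """Fraction of pairs for which the resource links at least one word of
    Text1 to a word of Text2. `related(w1, w2)` is a lookup into the resource
    (e.g. hypernymy); `pairs` is a list of (tokens1, tokens2)."""
    hits = sum(
        any(related(w1, w2) for w1 in t1 for w2 in t2) for t1, t2 in pairs
    )
    return hits / len(pairs) if pairs else 0.0

def resource_impact(pos_pairs, neg_pairs, related):
    # Intuition only: a useful resource should fire more often on positive
    # (entailing) pairs than on negative ones.
    return resource_coverage(pos_pairs, related) - resource_coverage(neg_pairs, related)

# Toy lookup standing in for a real lexical resource
toy = {("pig", "animal"), ("cat", "animal")}
related = lambda a, b: (a, b) in toy
pos = [(["the", "pig", "ran"], ["an", "animal", "ran"])]
neg = [(["the", "table", "broke"], ["an", "animal", "ran"])]
print(resource_impact(pos, neg, related))
```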
|
We have proposed a method for estimating the impact of a lexical resource on the performance of a text-to-text semantic inference system. The starting point has been the definition of the RID index, which captures the intuition that in current datasets useful resources need to discriminate between positive and negative pairs. We have then shown that the RID index is highly correlated with the accuracy of the system for balanced datasets and with the F1 for the unbalanced one, a result that allows us to use the RID as a reliable indicator of the impact of a resource. As for future work, we intend to further generalize our current findings by applying the same methodology to different text-to-text inference algorithms, starting from those already available in the EXCITEMENT Open Platform. We also want to conduct experiments on operations, such as summing, with this index to describe the combined effect of different resources.
| 22
|
Distributional Semantics
|
48_2014
| 2,014
|
Alice Mariotti, Malvina Nissim
|
Parting ways with the partitive view: a corpus-based account of the Italian particle "ne"
|
ENG
| 2
| 2
| 1
|
Università di Bologna, University of Groningen
| 2
| 1
| 0
| 1
|
Malvina Nissim
| 0
|
0
|
Netherlands, Italy
|
Groningen, Bologna
|
The Italian clitic “ne” is traditionally described as a partitive particle. Through an annotation exercise leading to the creation of a 500 instance dataset, we show that the partitive feature isn’t dominant, and the anaphoric properties of “ne”, syntactically and semantically, are what we should focus on for a comprehensive picture of this particle, also in view of its computational treatment.
|
The Italian particle “ne” is a clitic pronoun. Traditionally, linguistic accounts of “ne” focus on two of its aspects: its syntactic behaviour and its being a conveyor of partitive relations. Syntactically, this particle has been studied extensively, especially in connection with unaccusative verbs (Belletti and Rizzi, 1981; Burzio, 1986; Sorace, 2000). In Russi’s volume specifically dedicated to clitics, the chapter devoted to “ne” only focuses on the grammaticalisation process which brought the clitic to be incorporated in some verbs, causing it to lose its pronominal properties. It is referred to as “the ‘partitive’ ne” (Russi, 2008, p. 9). In (Cordin, 2001), the clitic is described in detail, and shown to serve three main uses. It can be a partitive pronoun, usually followed by a quantifier, as in (1). It can be used purely anaphorically to refer to a previously introduced entity, such as “medicine” in (2). The third use is as a locative adverb, like in (3). (1) Quanti giocatori di quell’U17-U19 quest’anno o l’anno scorso hanno giocato minuti importanti in prima squadra? A me ne risultano 2 o 3. How many players of that U17-U19 [team] this year or last year have played important minutes in the first team? I think 2 or 3 [of them] (2) Tu sai che la medicina fa bene e pretendi che il palato, pur sentendone l’amaro, continui a gustarla come se fosse dolce. You know that the medicine is good for you, and you ask your palate to enjoy it as if it was sweet, in spite of tasting [its] bitterness. (3) Me ne vado. I’m leaving. Note that for both partitive and non-partitive uses, in order to interpret the ne, the antecedent must be identified (“players of that U17-U19 [team]” in (1) and “medicine” for (2)). While there has been a recent effort to highlight the anaphoric properties of real occurrences of “ne” (Nissim and Perboni, 2008), there isn’t as yet a comprehensive picture of this particle. In this paper, we contribute a series of annotation schemes that capture the anaphoric nature of “ne”, and account for the different kinds of relations it establishes with its antecedent. We also contribute an annotated dataset that can be used for training automatic resolution systems, and that as of now provides us with a picture of this particle which is the most comprehensive to date.
|
Actual corpus data, annotated thanks to the development of specific annotation schemes focused on the anaphoric potential of “ne”, shows that the function of “ne” cannot at all be reduced to that of a ‘partitive’ pronoun or a test for certain syntactic types, as is usually done in the theoretical literature. It also highlights several aspects of the anaphoric properties of “ne”, both semantically and syntactically. We plan to exploit the dataset to develop an automatic resolution system.
| 7
|
Lexical and Semantic Resources and Analysis
|
49_2014
| 2,014
|
Alessandro Mazzei
|
On the lexical coverage of some resources on Italian cooking recipes
|
ENG
| 1
| 0
| 0
|
Università di Torino
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Turin
|
We describe an experiment designed to measure the lexical coverage of some resources over the Italian cooking recipe genre. First, we built a small cooking recipe dataset; second, we carried out a qualitative morpho-syntactic analysis of the dataset; and third, we performed a quantitative analysis of its lexical coverage.
|
The study reported in this paper is part of an applicative project in the field of nutrition. We are designing a software service for Diet Management (Fig. 1) that, through a smartphone, allows one to retrieve, analyze and store nutrition information about courses. In our hypothetical scenario, the interaction between the user and the food is mediated by an intelligent recommendation system that, on the basis of various factors, encourages or discourages the user from eating a specific course. The main factors that the system needs to account for are: (1) the diet that the user intends to follow, (2) the food that has been eaten in the last few days and (3) the nutritional values of the ingredients of the course and its specific recipe. Crucially, in order to extract the complete salient nutrition information from a recipe, we need to analyze the sentences of the recipe. To carry out this information extraction task we intend to use a syntactic parser together with a semantic interpreter. We intend to use both a deep interpreter, which has been shown to perform well in restricted domains (Fundel et al., 2007; Lesmo et al., 2013), and a shallow interpreter, which is more widely used in practical applications (Manning et al., 2008). In order to optimize the information extraction task we need to evaluate the specific features of the applicative domain (Fisher, 2001; Jurafsky, 2014). In the next sections we present a preliminary study towards the realization of our NLP system, i.e. a linguistic analysis of the Italian recipe domain.
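A minimal sketch of the kind of lexical-coverage measure used in the quantitative analysis: the fraction of tokens found in a resource, optionally backing off to the lemma; the toy lexicon, tokens and lemmatizer are invented for the example.

```python
def lexical_coverage(tokens, lexicon, lemmatize=None):
    """Fraction of (non-unique) tokens that are found in a lexical resource;
    optionally back off to the lemma when the word form itself is not covered."""
    covered = 0
    for tok in tokens:
        t = tok.lower()
        if t in lexicon or (lemmatize and lemmatize(t) in lexicon):
            covered += 1
    return covered / len(tokens) if tokens else 0.0

# Toy example with a hypothetical resource and lemmatizer
lexicon = {"tagliare", "cipolla", "olio", "aggiungere"}
tokens = ["Tagliate", "le", "cipolle", "e", "aggiungete", "olio"]
lemmas = {"tagliate": "tagliare", "cipolle": "cipolla", "aggiungete": "aggiungere"}
print(lexical_coverage(tokens, lexicon, lemmatize=lambda w: lemmas.get(w, w)))
```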
|
In this paper we presented a preliminary study on cooking recipes in Italian. The qualitative analysis emphasizes the importance of the sentence splitter and of the PoS tagger for a correct morpho-syntactic analysis. From the quantitative lexical coverage analysis we can put forward a number of tentative conclusions. First, there is great linguistic variation among cookbooks. Second, general lexical resources outperform domain-specific resources with respect to lexical coverage. Third, lemmatization can improve the recall of the algorithm with respect to the lexical resource.
| 9
|
Textual Genres & Literature Linguistics
|
50_2014
| 2,014
|
Anne-Lyse Minard, Alessandro Marchetti, Manuela Speranza
|
Event Factuality in Italian: Annotation of News Stories from the Ita-TimeBank
|
ENG
| 3
| 2
| 1
|
Fondazione Bruno Kessler
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
In this paper we present ongoing work devoted to the extension of the Ita-TimeBank (Caselli et al., 2011) with event factuality annotation on top of TimeML annotation, where event factuality is represented on three main axes: time, polarity and certainty. We describe the annotation schema proposed for Italian and report on the results of our corpus analysis.
|
In this work, we propose an annotation schema for factuality in Italian adapted from the schema for English developed in the NewsReader project (Tonelli et al., 2014) and describe the annotation performed on top of event annotation in the Ita-TimeBank (Caselli et al., 2011). We aim at the creation of a reference corpus for training and testing a factuality recognizer for Italian. The knowledge of the factual or non-factual nature of an event mentioned in a text is crucial for many applications (such as question answering, information extraction and temporal reasoning) because it allows us to recognize whether an event refers to a real or to a hypothetical situation, and enables us to assign it to its time of occurrence. In particular, we are interested in the representation of information about a specific entity on a timeline, which enables easier access to related knowledge. The automatic creation of timelines requires the detection of situations and events in which target entities participate. To be able to place an event on a timeline, a system has to be able to select the events which happen or that are true at a certain point in time or in a time span. In a real context (such as the context of a newspaper article), the situations and events mentioned in texts can refer to real situations in the world, have no real counterpart, or have an uncertain nature. The FactBank guidelines are the reference guidelines for factuality in English and FactBank is the reference corpus (Sauri and Pustejovsky, 2009). More recently, other guidelines and resources have been developed (Wonsever et al., 2012; van Son et al., 2014), but, to the best of our knowledge, no resources exist for event factuality in Italian.
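To make the three-axis schema concrete, here is a minimal sketch of an event-factuality record with time, polarity and certainty attributes; the value sets and the is_factual check are illustrative placeholders, not the official tagset of the Italian guidelines.

```python
from dataclasses import dataclass

# Illustrative placeholder value sets for the three factuality axes
TIME = {"PAST/PRESENT", "FUTURE", "NON_APPLICABLE"}
POLARITY = {"POS", "NEG", "UNDERSPECIFIED"}
CERTAINTY = {"CERTAIN", "NON_CERTAIN", "UNDERSPECIFIED"}

@dataclass
class EventFactuality:
    event_id: str
    time: str
    polarity: str
    certainty: str

    def is_factual(self):
        # e.g. a past/present, positive, certain event can be placed on a timeline
        return (self.time == "PAST/PRESENT" and self.polarity == "POS"
                and self.certainty == "CERTAIN")

e = EventFactuality("e17", "PAST/PRESENT", "POS", "CERTAIN")
print(e.is_factual())
```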
|
In this paper we have presented an annotation schema of event factuality in Italian and the annotation task done on the Ita-TimeBank. In our schema, factuality information is represented by three attributes: time of the event, polarity of the statement and certainty of the source about the event. We have selected from the Ita-TimeBank 170 documents containing 10,205 events and we have annotated them following the proposed annotation schema. The annotated corpus is freely available for non-commercial purposes from https://hlt.fbk.eu/technologies/fact-ita-bank. The resource has been used to develop a system based on machine learning for the automatic identification of factuality in Italian. The tool has been evaluated on a test dataset and obtained 76.6% accuracy, i.e. the system identified the right value of the three attributes in 76.6% of the events. This system will be integrated in the TextPro tool suite (Pianta et al., 2008).
| 7
|
Lexical and Semantic Resources and Analysis
|
51_2014
| 2,014
|
Johanna Monti
|
An English-Italian MWE dictionary
|
ENG
| 1
| 1
| 1
|
Università degli Studi di Sassari
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Sassari
|
The translation of Multiword Expressions (MWEs) requires the knowledge of the correct equivalent in the target language, which is hardly ever the result of a literal translation. This paper is based on the assumption that the proper treatment of MWEs in Natural Language Processing (NLP) applications, and in particular in Machine Translation and Translation technologies, calls for a computational approach which must be, at least partially, knowledge-based, and in particular should be grounded on an explicit linguistic description of MWEs, both using an electronic dictionary and a set of rules. The hypothesis is that a linguistic approach can complement probabilistic methodologies to help identify and translate MWEs correctly, since hand-crafted and linguistically-motivated resources, in the form of electronic dictionaries and local grammars, obtain accurate and reliable results for NLP purposes. The methodology adopted for this research work is based on (i) Nooj, an NLP environment which allows the development and testing of the linguistic resources, (ii) an electronic English-Italian MWE dictionary, (iii) a set of local grammars. The dictionary mainly consists of English phrasal verbs, support verb constructions, idiomatic expressions and collocations together with their translation in Italian, and contains different types of MWE POS patterns.
|
This paper presents a bilingual dictionary of MWEs from English to Italian. MWEs are a complex linguistic phenomenon, ranging from lexical units with a relatively high degree of internal variability to expressions that are frozen or semi-frozen. They are very frequent and productive word groups both in everyday languages and in languages for special purposes and are the result of human creativity which is not ruled by algorithmic processes, but by very complex processes which are not fully representable in a machine code since they are driven by flexibility and intuition. Their interpretation and translation sometimes present unexpected obstacles mainly because of inherent ambiguities, structural and lexical asymmetries between languages and, finally, cultural differences. The identification, interpretation and translation of MWEs still represent open challenges, both from a theoretical and a practical point of view, in the field of Machine Translation and Translation technologies. Empirical approaches bring interesting complementary robustness-oriented solutions but taken alone, they can hardly cope with this complex linguistic phenomenon for various reasons. For instance, statistical approaches fail to identify and process non high-frequent MWEs in texts or, on the contrary, they are not able to recognise strings of words as single meaning units, even if they are very frequent. Furthermore, MWEs change continuously both in number and in internal structure with idiosyncratic morphological, syntactic, semantic, pragmatic and translational behaviours. The main assumption of this paper is that the proper treatment of MWEs in NLP applications calls for a computational approach which must be, at least partially, knowledge-based, and in particular should be grounded on an explicit linguistic description of MWEs, both using a dictionary and a set of rules. The methodology adopted for this research work is based on: (I) Nooj an NLP environment which allows the development and testing of the linguistic resources, (ii) an electronic English-Italian (EI) MWE dictionary, based on an accurate linguistic description that accounts for different types of MWEs and their semantic properties by means of well-defined steps: identification, interpretation, disambiguation and finally application, (iii) a set of local grammars.
|
In conclusion, the focus of this research for the coming years will be to improve the results obtained so far and to extend the research work to provide a more comprehensive methodology for MWE processing in MT and translation technologies, taking into account not only the analysis phase but also the generation one. This experiment provides, on the one hand, an investigation of a broad variety of combinations of MWE types and an exemplification of their behaviour in texts extracted from different corpora and, on the other hand, a representation method that foresees the interaction of an electronic dictionary and a set of local grammars to efficiently handle different types of MWEs and their properties in MT as well as in other types of NLP applications. This research work has therefore produced two main results in the field of MWE processing so far: the development of a first version of an English-Italian electronic dictionary, specifically devoted to different MWE types, and the analysis of a first set of specific MWE structures from a semanto-syntactic point of view together with the development of local grammars for the identification of continuous and discontinuous MWEs in the form of FST/FSA.
| 7
|
Lexical and Semantic Resources and Analysis
|
52_2014
| 2,014
|
Giovanni Moretti, Sara Tonelli, Stefano Menini, Rachele Sprugnoli
|
ALCIDE: An online platform for the Analysis of Language and Content In a Digital Environment
|
ENG
| 4
| 2
| 0
|
Fondazione Bruno Kessler, Università di Trento
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
This work presents ALCIDE (Analysis of Language and Content In a Digital Environment), a new platform for Historical Content Analysis. Our aim is to improve Digital Humanities studies by integrating methodologies taken from human language technology and an easily understandable data structure representation. ALCIDE provides a wide collection of tools that go beyond simple metadata indexing, implementing functions of textual analysis such as named entity recognition, key-concept extraction, lemma and string-based search and geo-tagging.
|
In this paper we present ALCIDE (Analysis of Language and Content In a Digital Environment), a new platform for Historical Content Analysis. Our aim is to improve Digital Humanities studies by implementing both methodologies taken from human language technology and an easily understandable data structure representation. ALCIDE provides a wide collection of tools that go beyond text indexing, implementing functions of textual analysis such as named entity recognition (e.g. identification of names of persons and locations within texts), key-concept extraction, textual search and geotagging. Every function and piece of information provided by ALCIDE is time-bounded and all query functions are related to this feature; the leitmotif of the portal can be summarized as: “All I want to know related to a time period”. Our work aims at providing a flexible tool combining automatic semantic analysis and manual annotation tailored to the temporal dimension of documents. The ALCIDE platform currently supports corpus analysis of English and Italian documents.
|
In this paper we described the general workflow and specific characteristics of the ALCIDE platform. In the future, we aim to improve the efficiency of current functionalities and to add new ones, such as (i) identification of temporal expressions and events (and the extraction of relations between them), (ii) distributional semantic analysis (i.e. quantification and categorization of semantic similarities between linguistic elements) and (iii) sentiment analysis on statements and key-concepts. ALCIDE is already online but it is password protected. When the implementation stage is more advanced, we will make it freely accessible and users will be allowed to upload their corpora in Excel, XML or TEI format and explore them with the platform. For the moment a video of the ALCIDE demo is available at http://dh.fbk.eu/projects/alcideanalysis-language-and-content-digital-enviroment.
| 7
|
Lexical and Semantic Resources and Analysis
|
53_2014
| 2,014
|
Stefanos Nikiforos, Katia Lida Kermanidis
|
Inner Speech, Dialogue Text and Collaborative Learning in Virtual Learning Communities
|
ENG
| 2
| 1
| 0
|
Ionian University
| 1
| 1
| 1
| 2
|
Stefanos Nikiforos, Katia Lida Kermanidis
| 0
|
0
|
Greece
|
Corfu
|
Virtual Learning Communities offer new opportunities in education and set new challenges in Computer Supported Collaborative Learning. In this study, a detailed linguistic analysis of the discourse among the class members is proposed in five distinct test case scenarios, in order to detect whether a Virtual Class is a community or not. Communities are of particular importance as they provide benefits to students and effectively improve knowledge perception. This analysis is focused on two axes: inner speech and collaborative learning, as they both are basic features of a community.
|
Virtual Learning Communities (VLCs) constitute an aspect of particular importance for Computer Supported Collaborative Learning (CSCL). The stronger the sense of community is, the more effectively is learning perceived, resulting in less isolation and greater satisfaction (Rovai, 2002; Daniel et al, 2003; Innes, 2007). Strong feelings of community provide benefits to students by increasing 1) the commitment to group goals, 2) collaboration among them and 3) motivation to learn (Rovai, 2002). Virtual Classes (VCs) are frequently created and embodied in the learning procedure (Dillenbourg and Fischer, 2007). Nevertheless there are questions arising: Is every VC always a community as well? How can we detect the existence of a community? What are its idiosyncratic properties? Sharing of knowledge within a community is achieved through shared codes and language (Daniel et al, 2003; Stahl, 2000; Innes, 2007). Language is not only a communication tool; it also serves knowledge and information exchange (Dillenbourg and Fischer, 2007; Knipfer et al, 2009; Daniel et al, 2003; Bielaczyc and Collins, 1999). Communication and dialogue are in a privileged position in the learning process due to the assumption that knowledge is socially constructed (Innes, 2007). Collaborative learning (CL) is strongly associated with communities as it occurs when individuals are actively engaged in a community in which learning takes place through collaborative efforts (Stahl et al, 2006). This active engagement is achieved through public discussion, which is a central way for a community to expand its knowledge (Bielaczyc and Collins, 1999). Developing an understanding of how meaning is collaboratively constructed, preserved, and re-learned through the media of language in group interaction, is a challenge for CL theory (Daniel et al, 2003;Wells, 2002; Warschauer, 1997; Koschmann, 1999). Inner speech (IS) is an esoteric mental language, usually not outerly expressed, having an idiosyncratic syntax. When outerly expressed, its structure consists of apparent lack of cohesion, extensive fragmentation and abbreviation compared to the outer (formal) language used in most everyday interactions. Clauses keep only the predicate and its accompanying words, while the subject and its dependents are omitted. This does not lead to misunderstandings if the thoughts of the individuals are in accordance (they form a community). The more identical the thoughts of the individuals are, the less linguistic cues are used (Vygotsky, 2008; Socolov, 1972). Various works using discourse analysis have been presented in the CSCL field: some of them focus on the role of dialogue (Wells, 2002), others examine the relationship between teachers and students (Blau et al., 1998; Veermans and Cesareni, 2005), while others focus on the type of the language used (Maness, 2008; Innes, 2007), on knowledge building (Zhang et al., 2007), or on the scripts addressed (Kollar et al., 2005). Spanger et al. (2009) analyzed a corpus of referring expressions targeting to develop algorithms for generating expressions in a situated collaboration. Other studies use machine learning techniques in order to build automated classifiers of affect in chat logs (Brooks, 2013). Rovai (2002), examined the relationship between the sense of community and cognitive learning in an online educational environment. Daniel et al. (2003) explored how the notions of social capital and trust can be extended in virtual communities. 
Unlike these studies, the proposed approach, for the first time to the authors' knowledge, takes into account the correlation between community properties and both inner speech and collaborative learning features (Bielaczyc and Collins, 1999) by applying linguistic analysis to the discourse among class members as a means for community detection. To this end, the discourse of four different types of VCs is analyzed and compared against non-conversational language use.
|
Applying linguistic analysis to the discourse among the members of a VC can provide us with useful results. Combining the results of the two categories (inner speech and collaboration), we can obtain strong indications of community existence. Furthermore, the results of the analysis can help us improve the design of VCs. However, there is room for future research, e.g. applying this model and evaluating it on a larger corpus and on different case studies.
| 8
|
Learner Corpora and Language Acquisition
|
54_2014
| 2,014
|
Maria Palmerini, Renata Savy
|
Gli errori di un sistema di riconoscimento automatico del parlato. Analisi linguistica e primi risultati di un progetto di ricerca interdisciplinare
|
ITA
| 2
| 2
| 1
|
Università di Salerno, Cedat 85
| 2
| 0
| 0
| 0
|
0
| 1
|
Maria Palmerini
|
Italy
|
Salerno
|
The work presents the results of a classification and linguistic analysis of the errors of an automatic speech recognition (ASR) system produced by Cedat 85. This is the first phase of a research project aimed at developing error reduction strategies.
|
The research project was born from a collaboration between the University of Salerno and Cedat 85, a leading Italian company in the automatic processing of speech. The purpose of the project is an accurate assessment of the errors produced by an automatic speech transcription (ASR) system, followed by a more detailed linguistic analysis and subsequent annotation. The most widely used estimate of ASR quality, the word error rate (WER), is computed automatically by comparing a manual transcription (aligned to the signal) with the corresponding transcription obtained from the ASR system. This comparison identifies the wrong words (Substitutions), the missing words (Deletions) and the wrongly inserted words (Insertions), as well as the total number of reference words (N), yielding: WER = (S + D + I) / N × 100. This estimate considers neither the cause nor the relevance of an error, making it at best a rough reference for a coarse assessment of an ASR system, with no indication of its real usefulness and adequacy, nor of the possibilities for intervention and improvement. Most last-generation ASR systems, which work on spontaneous speech, use technologies and algorithms that exploit the enormous computing power currently available, but they differ significantly in the choice of parameters, in the intermediate steps, in the criteria for selecting the most likely candidates, and in the tools used to process training data. A qualitative, as well as quantitative, criterion for error assessment is necessary to adapt the system to its target environment and to indicate possible improvements. Recent studies, both technological and linguistic/psycholinguistic, indicate correlations between errors and word frequency (in the vocabulary or in usage), speaking rate, ambiguity (homophony) and acoustic confusability (minimal and sub-minimal pairs). However, systematic studies are still missing that take into account the correlation with morpho-lexical classes, phonological and syllabic structures, syntagmatic sequences, constituent order and, above all, prosodic factors. In this contribution we present a first part of the results of a broader study on the weight of these factors, focusing on the criteria for the linguistic classification of the data and on the correlations found between the presence (and type) of error and phono-morphological and morphosyntactic categories.
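As an illustration of the WER formula above (not part of the original paper), here is a minimal Python sketch, assuming the substitution, deletion and insertion counts have already been obtained from the alignment:

```python
def wer(substitutions: int, deletions: int, insertions: int, n_ref_words: int) -> float:
    """Word error rate as defined above: WER = (S + D + I) / N * 100."""
    return (substitutions + deletions + insertions) * 100.0 / n_ref_words

# Hypothetical counts: 120 substitutions, 40 deletions, 25 insertions over 2000 reference words
print(wer(120, 40, 25, 2000))  # 9.25
```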
|
In this preliminary phase of analysis, a first important conclusion can already be drawn: the purely quantitative word error rate overestimates the recognition failures of an ASR system. The annotation carried out and the subsequent qualitative assessment normalise the WER figures and attribute the largest share of errors to unpredictable phenomena that are of little significance for measuring the efficiency of the system. In this respect, indeed, uncertainty and inconsistency in the graphic rendering affect automatic and manual transcription almost equally. Nevertheless, the weight of the misrecognitions of these segments can be reduced by adopting a finer annotation scheme, both as more solid standards for transcribers and as a model for the ASR system. We conclude by suggesting that some targeted interventions on the phone set, the enrichment of the vocabulary with possible phonetic variants, and a better treatment of prosodic phenomena could improve the performance of the system to some extent.
| 13
|
Multimodal
|
55_2014
| 2,014
|
Lucia C. Passaro, Alessandro Lenci
|
"Il Piave mormorava...": Recognizing Locations and other Named Entities in Italian Texts on the Great War
|
ENG
| 2
| 1
| 1
|
Università di Pisa
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
Increasing amounts of sources about World War I (WWI) are nowadays available in digital form. In this paper, we illustrate the automatic creation of a NE-annotated domain corpus used to adapt an existing NER to Italian WWI texts. We discuss the annotation of the training and test corpus and provide results of the system evaluation.
|
Increasing amounts of sources about World War I (WWI) are nowadays available in digital form. The centenary of the Great War is also going to foster this trend, with new historical sources being digitized. This wealth of digital documents offers us an unprecedented possibility to achieve a multidimensional and multiperspectival insight on war events, understanding how soldiers and citizens of different countries and social conditions experienced and described the events in which they were involved together, albeit on opposite fronts and with different roles. Grasping this unique opportunity however calls for advanced methods for the automatic semantic analysis of digital historical sources. The application of NLP methods and tools to historical texts is indeed attracting growing interest and raises interesting and highly challenging research issues (Piotrowsky 2012). The research presented in this paper is part of a larger project dealing with the digitization and computational analysis of Italian War Bulletins of the First World War (for details see Boschetti et al. 2014). In particular, we focus here on the domain and language adaptation of a Named Entity Recognizer (NER) for Italian. As a byproduct of this project, we illustrate the automatic creation of a NE-annotated domain corpus used to adapt the NER to the WWI texts. War bulletins (WBs) were issued by the Italian Comando Supremo "Supreme Headquarters" during WWI and WWII as the official daily report about the military operations of the Italian armed forces. They are full of Named Entities, mostly geographical locations, often referring to small places unmarked in normal geographic maps or whose name changed during the last century because of geopolitical events, hence hardly attested in any gazetteer. To accomplish the Named Entity Recognition task, several approaches have been proposed, such as Rule Based Systems (Grover et al., 2008; Mikheev et al., 1999a; Mikheev et al., 1999b), Machine Learning based approaches (Alex et al., 2006; Finkel et al., 2005; Hachey et al., 2005; Nissim et al., 2004, including HMM, Maximum Entropy, Decision Tree, Support Vector Machines and Conditional Random Field) and hybrid approaches (Srihari et al., 2001). We used a Machine Learning approach to recognize NEs. Rule-based systems usually give good results, but require long development time by expert linguists. Machine learning techniques, on the contrary, use a collection of annotated documents for training the classifiers. Therefore the development time moves from the definition of rules to the preparation of annotated corpora. The problems of NER in WWI bulletins are larger than those encountered in modern texts. The language used in such texts is early 20th century Italian, which is quite different from contemporary Italian in many respects and belongs to the technical and military jargon. These texts are therefore difficult to analyze using available Italian resources for NER, typically based on contemporary, standard Italian. Grover et al. (2008) describe the main problems encountered by NER systems on historical texts. They evaluated a rule-based NER system for person and place names on two sets of British Parliamentary records from the 17th and 19th centuries. One of the most important issues they had to deal with was the gap between archaic and contemporary language. This paper is structured as follows: in Section 2, we present the CoLingLab NER and in Section 3 we describe its adaptation to WWI texts.
|
Location names play an important role in historical texts, especially in those - like WBs - describing the unfolding of military operations. In this paper, we presented the results of adapting an Italian NER to Italian texts about WWI through the automatic creation of a new NE-annotated corpus of WBs. The adapted NER shows a significantly increased ability to identify Locations. In the near future, we aim at processing other types of texts about the Great War (e.g., letters, diaries and newspapers) as part of a more general project of information extraction and text mining of war memories.
| 7
|
Lexical and Semantic Resources and Analysis
|
56_2014
| 2,014
|
Marco Carlo Passarotti
|
The Importance of Being sum. Network Analysis of a Latin Dependency Treebank
|
ENG
| 1
| 0
| 0
|
Università Cattolica del Sacro Cuore
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Milan
|
Network theory provides a suitable framework to model the structure of language as a complex system. Based on a network built from a Latin dependency treebank, this paper applies methods for network analysis to show the key role of the verb sum (to be) in the overall structure of the network.
|
Considering language as a complex system with deep relations between its components is a widespread approach in contemporary linguistics (Briscoe, 1998; Lamb, 1998; Steels, 2000; Hudson, 2007). Such a view implies that language features complex network structures at all its levels of analysis (phonetic, morphological, lexical, syntactic, semantic). Network theory provides a suitable framework to model the structure of linguistic systems from such a perspective. Network theory is the study of elements, called vertices or nodes, and their connections, called edges or links. A complex network is a (un)directed graph G(V, E) which is given by a set of vertices V and a set of edges E (Ferrer i Cancho, 2010). Vertices and edges can represent different things in networks. In a language network, the vertices can be different linguistic units (for instance, words), while the edges can represent different kinds of relations holding between these units (for instance, syntactic relations). So far, all the network-based studies in linguistics have concerned modern and living languages (Mehler, 2008a). However, the time is ripe to extend this approach to the study of ancient languages as well. Indeed, recent years have seen a large growth of language resources for ancient languages. Among these resources are syntactically annotated corpora (treebanks), which provide essential information for building syntactic language networks.
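To make the notion of a syntactic word network concrete, here is a minimal sketch (not from the paper) using networkx, with hypothetical (head, dependent) lemma pairs standing in for the edges one would extract from the Latin dependency treebank; a hub lemma such as sum would be expected to stand out by degree and centrality:

```python
import networkx as nx

# Hypothetical (head, dependent) lemma pairs extracted from a dependency treebank.
dependency_edges = [
    ("sum", "homo"), ("sum", "bonus"), ("sum", "ego"),
    ("amo", "ego"), ("amo", "puella"), ("video", "puella"),
]

G = nx.Graph()                       # undirected word network: vertices = lemmas
G.add_edges_from(dependency_edges)   # edges = syntactic relations

# Degree and betweenness centrality highlight hub words.
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
for lemma in sorted(G.nodes(), key=lambda w: -degree[w]):
    print(f"{lemma}: degree={degree[lemma]}, betweenness={betweenness[lemma]:.3f}")
```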
|
While the most widespread tools for querying and analyzing treebanks give results in terms of lists of words or sequences of trees, network analysis permits a synoptic view of all the relations that hold between the words in a treebank. This makes network analysis a powerful method to fully exploit the structural information provided by a treebank, for a better understanding of the properties of language as a complex system with interconnected elements.
| 7
|
Lexical and Semantic Resources and Analysis
|
57_2014
| 2,014
|
Arianna Pipitone, Vincenzo Cannella, Roberto Pirrone
|
I-ChatbIT: an Intelligent Chatbot for the Italian Language
|
ENG
| 3
| 1
| 1
|
Università di Palermo
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Palermo
|
A novel chatbot architecture for the Italian language is presented that is aimed at implementing cognitive understanding of the query by locating its correspondent subgraph in the agent's KB by means of a graph matching strategy purposely devised. The FCG engine is used for producing replies starting from the semantic poles extracted from the candidate answers' subgraphs. The system implements a suitable disambiguation strategy for selecting the correct answer by analyzing the commonsense knowledge related to the adverbs in the query that is embedded in the lexical constructions of the adverbs themselves as a proper set of features. The whole system is presented, and a complete example is reported throughout the paper.
|
In recent years the Question-Answering systems (QAs) have been improved by the integration with Natural Language Processing (NLP) techniques, which make them able to interact with humans in a dynamic way: the production of answers is more sophisticated than in classical chatterbots, where some sentence templates are pre-loaded and linked to specific questions. In this paper we propose a new methodology that integrates the chatterbot technology with the Cognitive Linguistics (CL) (Langacker, 1987) principles, with the aim of developing a QA system that is able to harvest linguistic knowledge from its inner KB, and use it for composing answers dynamically. Grammatical templates and structures tailored to the Italian language that are constructions of the Construction Grammar (CxG) (Goldberg, 1995) and a linguistic Italian source of verbs have been developed purposely, and used for the NL production. The result of the methodology implementation is I-ChatbIT, an Italian chatbot that is intelligent not only for the dynamic nature of the answers, but in the sense of cognitive understanding and production of NL sentences. Cognitive understanding of the NL query is achieved by placing it in the system's KB, which represents the conceptualization of the world as it has been perceived by the agent. The outcome of such a process is the generation of what we call the meaning activation subgraph in the KB. Browsing this subgraph, the system elaborates and detects the content of the answer, which is then grammatically composed through the linguistic base. The FCG engine is then used as the key component for producing the answer. In summary, the work reports the modeling of the two tasks outlined above. The paper is arranged as follows: in the next section the most popular chatbots are reviewed, devoting particular attention to the Italian ones. Section 3 describes the implemented methodology, explaining in detail a practical example. Finally, conclusions and future works are discussed in Section 4.
|
A novel chatbot architecture for the Italian language has been presented that is aimed at implementing cognitive understanding of the query by locating its correspondent subgraph in the agent's KB by means of a GED strategy based on the k-AT algorithm and the Jaro–Winkler distance. The FCG engine is used for producing replies starting from the semantic poles extracted from the candidate answers' subgraphs. The system implements a suitable disambiguation strategy for selecting the correct answer by analyzing the commonsense knowledge related to the adverbs in the query, which is embedded in the lexical constructions of the adverbs themselves as a proper set of features. Future work will focus on completing the IVS and on using explicit commonsense knowledge inside the KB for finer disambiguation. Finally, the graph matching strategy will be further tuned.
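The conclusion mentions the Jaro–Winkler distance as part of the graph-matching step; the following is a generic, self-contained sketch of that string similarity (a standard formulation, not the authors' implementation):

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity between two strings."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(len(s1), len(s2)) // 2 - 1
    m1, m2 = [False] * len(s1), [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):                      # count matching characters within the window
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t, k = 0, 0                                     # count transpositions among matched characters
    for i in range(len(s1)):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len(s1) + matches / len(s2) + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro similarity boosted by the length of the common prefix (up to 4 characters)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

print(jaro_winkler("martha", "marhta"))  # ~0.961
```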
| 3
|
Chatbots and Dialogue Systems
|
58_2014
| 2,014
|
Vito Pirrelli, Claudia Marzi, Marcello Ferro
|
Two-dimensional Wordlikeness Effects in Lexical Organisation
|
ENG
| 3
| 1
| 0
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
The main focus of research on wordlikeness has been on how serial processing strategies affect perception of similarity and, ultimately, the global network of associative relations among words in the mental lexicon. Comparatively little effort has been put so far, however, into an analysis of the reverse relationship: namely, how global organisation effects influence the speakers' perception of word similarity and of words' internal structure. In this paper, we explore the relationship between the two dimensions of wordlikeness (the "syntagmatic" and the "paradigmatic" one), to suggest that the same set of principles of memory organisation can account for both dimensions.
|
The language faculty requires the fundamental ability to retain sequences of symbolic items, access them in recognition and production, find similarities and differences among them, and assess their degree of typicality (or WORDLIKENESS) with respect to other words in the lexicon. In particular, perception of formal redundancy appears to be a crucial precondition to morphology induction, epitomised by the so-called WORD ALIGNMENT problem. The problem arises whenever one has to identify recurrence of the same pattern at different positions in time, e.g. book in handbook, or mach in both German macht and gemacht. Clearly, no "conjunctive" letter coding scheme (e.g., Coltheart et al. 2001; Harm & Seidenberg 1999; McClelland & Rumelhart 1981; Perry et al. 2007; Plaut et al. 1996), which requires that the representation of each symbol in a string be anchored to its position, would account for such an ability. In Davis' (2010) SPATIAL ENCODING, the identity of the letter is described as a Gaussian activity function whose max value is centred on the letter's actual position, enforcing a form of fuzzy matching, common to other models disjunctively encoding a symbol and its position (Grainger & van Heuven 2003; Henson 1998; Page & Norris 1998, among others). The role of specific within-word letter positions interacts with short-term LEXICAL BUFFERING and LEXICALITY effects. Recalling a stored representation requires that all symbols forming that representation are simultaneously activated and sustained in working memory, waiting to be serially retrieved. Buffering accounts for the comparative difficulty in recalling long words: more concurrently-activated nodes are easier to be confused, missed or jumbled than fewer nodes are. Notably, more frequent words are less likely to be confused than low-frequency words, since long-term entrenchment improves performance of immediate serial recall in working memory (Baddeley 1964; Gathercole et al. 1991). Serial (or syntagmatic) accounts of local ordering effects in word processing are often complemented by evidence of another, more global (or paradigmatic) dimension of word perception, based on the observation that, in the normal course of processing a word, other non-target neighbouring words become active. In the word recognition literature, there is substantial agreement on the inhibitory role of lexical neighbours (Goldinger et al. 1989; Luce & Pisoni 1998; Luce et al. 1990). Other things being equal, target words with a large number of neighbours take more time to be recognised and repeated, as they suffer from their neighbours' competition in lexical buffering. This is particularly true when the target word is low-frequency. Nonetheless, there is contrasting evidence that dense neighbourhoods may speed up word reading time rather than delaying it (Huntsman & Lima 2002), and that high-entropy word families make their members more readily accessible than low-entropy families (Baayen et al. 2006). Marzi et al. (2014) provide clear computational evidence of interactive effects of paradigm regularity and type/token lexical frequency on the acquisition of German verb inflection. Token frequency plays a paramount role in item-based learning, with highly frequent words being acquired at comparatively earlier stages than low-frequency words. Morphological regularity, on the other hand, has an impact on paradigm acquisition, regular paradigms being learned, on average, within a shorter time span than fully or partially irregular paradigms. 
Finally, frequency distribution of paradigmatically-related words significantly interacts with morphological regularity. Acquisition of regular paradigms depends less heavily on item-based storage and is thus less affected by differences in frequency distributions of paradigm members. Conversely, irregular paradigms are less prone to be generalised through information spreading and their acquisition mainly relies on itemised storage, thus being more strongly affected by the frequency distribution of paradigm members and by frequency-based competition, both intra- and inter-paradigmatically. We suggest that compounded evidence of wordlikeness and paradigm frequency effects can be accounted for within a unitary computational model of lexical memory. We provide here preliminary evidence in this direction, by looking at the way a specific, neuro-biologically inspired computational model of lexical memories, Temporal Self-Organising Maps (TSOMs), accounts for such effects.
|
Wordlikeness is a fundamental determinant of lexical organisation and access. Two quantitative measures of wordlikeness, namely n-gram probability and neighbourhood density, relate to important dimensions of lexical organisation: the syntagmatic (or horizontal) dimension, which controls the level of predictability and entrenchment of a serial memory trace, and the paradigmatic (or vertical) dimension, which controls the number of neighbours that are co-activated by the target word. The two dimensions are nicely captured by TSOMs, allowing the investigation of their dynamic interaction. In accessing and recalling a target word, a large pool of neighbours can be an advantage, since they tend to support patterns of activation that are shared by the target word. However, their help may turn out to interfere with recall, if the connection strength of one or more neighbours is overwhelmingly higher than that of the target. Deeply entrenched friends eventually become competitors. This dynamic establishes a nice connection with paradigm acquisition, where a uniform distribution of paradigm members is helpful in spreading morphotactic information and speeding up acquisition, while paradigmatically-related forms in skewed distributions compete with one another (Marzi et al. 2014). We argue that both neighbourhood and morphological effects are the result of predictive (syntagmatic) activation and competitive (paradigmatic) co-activation of parallel processing nodes in densely interconnected networks. As a final qualification, our experiments illustrate the dynamics of activation and storage of letter strings, with no information about morphological content. They provide evidence of the first access stages of early lexical processing, where strategies of automatic segmentation are sensitive to possibly apparent morphological information (Post et al. 2008). Nonetheless, our data suggest that perception of wordlikeness and morphological structure can be accounted for by a common pool of principles governing the organisation of long-term memories for time series.
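The two wordlikeness measures named above, n-gram probability and neighbourhood density, can be illustrated with a toy sketch (the lexicon and the scoring choices are assumptions for illustration; this is not the TSOM model itself):

```python
import math
from collections import Counter
from itertools import product

lexicon = ["hand", "handbook", "book", "hund", "band", "land", "sand"]  # toy lexicon (assumption)

# Syntagmatic measure: average bigram log-probability of a string, estimated from the lexicon.
bigrams = Counter(b for w in lexicon for b in zip("#" + w, w + "#"))
total = sum(bigrams.values())

def bigram_score(word):
    pairs = list(zip("#" + word, word + "#"))
    return sum(math.log((bigrams[p] + 1) / (total + 1)) for p in pairs) / len(pairs)

# Paradigmatic measure: neighbourhood density = number of lexicon words at edit distance 1.
def edit_distance(a, b):
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i, j in product(range(1, len(a) + 1), range(1, len(b) + 1)):
        d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1, d[i-1][j-1] + (a[i-1] != b[j-1]))
    return d[len(a)][len(b)]

def neighbourhood_density(word):
    return sum(1 for w in lexicon if w != word and edit_distance(word, w) == 1)

for w in ["band", "handbook"]:
    print(w, round(bigram_score(w), 2), neighbourhood_density(w))
```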
| 7
|
Lexical and Semantic Resources and Analysis
|
59_2014
| 2,014
|
Octavian Popescu, Ngoc Phuoc an Vo, Anna Feltracco, Elisabetta Jezek, Bernardo Magnini
|
Toward Disambiguating Typed Predicate-Argument Structures for Italian
|
ENG
| 5
| 2
| 0
|
Università di Pavia, Fondazione Bruno Kessler
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pavia, Trento
|
We report a word sense disambiguation experiment on Italian verbs where both the sense inventory and the training data are derived from T-PAS, a lexical resource of typed predicate-argument structures grounded on corpora. We present a probabilistic model for sense disambiguation that exploits the semantic features associated to each argument position of a verb.
|
Word Sense Disambiguation (WSD) (see (Agirre and Edmonds, 2006) for a comprehensive survey of the topic) is a task in Computational Linguistics where a system has to automatically select the correct sense of a target word in context, given a list of possible senses for it. For instance, given the target word chair in the context of the sentence The cat is on the chair, and given two possible senses for the word, let's call them chair as furniture and chair as human, a WSD system should be able to select the first sense as the appropriate one. An important aspect of WSD is that its complexity is affected by the ambiguity (i.e. the number of senses) of the words to be disambiguated. This has led in the past to discussing various characteristics of available sense repositories (e.g. WordNet, Fellbaum 1998), including the nature and the number of sense distinctions, particularly with respect to the application goals of WSD. In this paper we address Word Sense Disambiguation of Italian verbs. Differently from previous work on WSD for Italian (Bertagna et al. 2007), where the sense repository was ItalWordNet (Roventini et al. 2003), in our experiments we use verb senses derived from T-PAS, a repository of Typed Predicate Argument Structures for Italian acquired from corpora. There are two benefits of this choice: (i) word sense distinctions are now grounded on actual sense occurrences in corpora, this way ensuring a natural selection with respect to sense granularity; (ii) as in T-PAS a number of sentences are collected for each verb sense, there is no further need to annotate data for training and testing, avoiding the issue of re-interpreting sense distinctions by different people. The paper is organized as follows. Section 2 introduces T-PAS, including the methodology for its acquisition. Section 3 presents the probabilistic model that we have used for verb disambiguation and Section 4 reports on experimental results.
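The abstract describes a probabilistic model that exploits the semantic types of each argument position. As a hedged illustration only (the sense labels, slot names, toy counts and smoothing are hypothetical, not the authors' exact model), a naive-Bayes-style sketch could look like this:

```python
import math

# Hypothetical training data: for each verb sense (pattern id), counts of observed
# semantic types per argument position, derived from T-PAS-style annotated instances.
counts = {
    "seguire#1": {"subj": {"Human": 40, "Animal": 5}, "obj": {"Human": 30, "Vehicle": 15}},
    "seguire#2": {"subj": {"Human": 25},              "obj": {"Event": 20, "Course": 18}},
}
sense_freq = {"seguire#1": 45, "seguire#2": 38}

def disambiguate(arg_types):
    """Pick the sense maximising P(sense) * prod_i P(type_i | sense), with add-one smoothing."""
    best, best_score = None, float("-inf")
    total = sum(sense_freq.values())
    for sense, slots in counts.items():
        score = math.log(sense_freq[sense] / total)
        for slot, observed_type in arg_types.items():
            slot_counts = slots.get(slot, {})
            denom = sum(slot_counts.values()) + len(slot_counts) + 1
            score += math.log((slot_counts.get(observed_type, 0) + 1) / denom)
        if score > best_score:
            best, best_score = sense, score
    return best

print(disambiguate({"subj": "Human", "obj": "Event"}))  # expected: seguire#2
```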
|
We have presented a word sense disambiguation system for Italian verbs, whose senses have been derived from T-PAS, a lexical resource that we have recently developed. This is the first work (we hope that many others will follow) attempting to use T-PAS for an NLP task. The WSD system takes advantage of the T-PAS structure, particularly the presence of semantic types for each verbal argument position. Results, although preliminary, show a very good precision. As for the future, we intend to consolidate the disambiguation methodology and we aim at a more detailed annotation of the sentence arguments, corresponding to the internal structure of verb patterns. We plan to extend the analysis of the relationship between the senses of the different positions in a pattern in order to implement a tree-based metric, and also to substitute the role of the parser with an independent pattern matching system. The probabilistic model presented in Section 3 can be extended in order to determine also the probability that a certain word is a syntactic head.
| 7
|
Lexical and Semantic Resources and Analysis
|
60_2014
| 2,014
|
Fabio Poroli, Massimiliano Todisco, Michele Cornacchia, Cristina Delogu, Andrea Paoloni, Mauro Falcone
|
Il corpus Speaky
|
ITA
| 6
| 1
| 0
|
Fondazione Ugo Bordoni
| 1
| 0
| 0
| 0
|
0
| 6
|
Fabio Poroli, Massimiliano Todisco, Michele Cornacchia, Cristina Delogu, Andrea Paoloni, Mauro Falcone
|
Italy
|
Rome
|
In this work we present a human-machine dialogue corpus acquired within the framework of the Speaky Acutattile project using the Wizard of Oz technique. The corpus contains more than 60 hours of audio recordings, transcriptions and video. It is dedicated in particular to the analysis of turn management and error resolution. The Wizard of Oz simulation of the system was designed to allow spoken dialogue without restrictions on the subject, both at the level of turn management and at the level of phrase composition.
|
In this work we present a human-machine dialogue corpus acquired within the framework of the Speaky Acutattile project, a home-automation platform designed to support vulnerable users (the elderly, the visually impaired, etc.). The platform has been designed to provide users with a simplified tool for managing household equipment and other multimedia devices present at home (TV, stereo, etc.), but also for online access to many public services, such as health care, online payments, bookings, the purchase of travel tickets, etc. Data collection was carried out with the Wizard of Oz technique (Fraser and Gilbert, 1991; Dahlback et al., 1993). Although it requires more care and resources than other collection strategies, the technique is commonly regarded as one of the most reliable methods for prototyping user-oriented voice interfaces and for collecting data on how users interact with them. Apart from some structural limitations related to the experimental setting (such as the subject's lower involvement compared to a real user), the relevance of a human-machine dialogue corpus collected with this method depends on defining a priori the parameters that govern the behaviour of the Wizard, so that the user perceives it as much as possible as a machine (machine-like). In this work a mixed-initiative system simulation model (Allen et al., 2001) with "frame-and-slot" grammars (Bobrow et al., 1977) was also applied, including the dialogue behaviour protocol. The Wizard of Oz technique therefore made it possible to develop the dialogue understanding grammars with certain variants, while verifying the subjects' reactions to a system that appeared real and did not force them into fixed interaction paths for solving the tasks.
|
The Wizard of Oz technique allowed us to obtain a controlled corpus on some aspects of interaction that provide indications for the architecture of the dialogue system. The current data will be integrated with the acquisition of a new corpus in which the Wizard will be replaced by the prototype of the system, facing the same type of experimental users and the same usage scenarios, in order to obtain data comparable to the current ones, both to improve the system performance and to obtain valuable information on the Wizard of Oz technique itself. The database distribution policies will be defined at the end of the project (June 2015) and will hopefully allow free use for research activities, subject of course to an NDA (Non-Disclosure Agreement).
| 3
|
Chatbots and Dialogue Systems
|
61_2014
| 2,014
|
Manuela Sanguinetti, Cristina Bosco
|
Converting the parallel treebank ParTUT in Universal Stanford Dependencies
|
ENG
| 2
| 2
| 1
|
Università di Torino
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Turin
|
Assuming the increased need of language resources encoded with shared representation formats, the paper describes a project for the conversion of the multilingual parallel treebank ParTUT into the de facto standard of the Stanford Dependencies (SD) representation. More specifically, it reports the conversion process, currently implemented as a prototype, into the Universal SD format, which is more oriented to a cross-linguistic perspective and, therefore, more suitable for the purpose of our resource.
|
The increasing need to use language resources for the development and training of automatic systems goes hand in hand with the opportunity to make such resources available and accessible. This opportunity, however, is often precluded by the use of different formats for encoding linguistic content. Such differences may be dictated by several factors that, in the specific case of syntactically annotated corpora, or treebanks, may include the choice of a constituency vs dependency-based paradigm, the specific morphological and syntactical features of the language at issue, or the end use the resource has been designed for. This variety of formats makes the reuse of these resources in different contexts more difficult. In the case of parsing, and of treebanks, a few steps towards the spread of formats that could be easily shared by the community have led, also thanks to the efforts devoted to the organization of evaluation campaigns, to the use of what have then become de facto standards. This is the case, for example, of the Penn Treebank format for constituency paradigms (Mitchell et al., 1993). Within the framework of dependency-based representations, a new format has recently gained increasing success, i.e. that of the Stanford Typed Dependencies. The emergence of this format is attested by several projects on the conversion and harmonization of treebanks into this representation format (Bosco et al., 2013; Haverinen et al., 2013; McDonald et al., 2013; Tsarfaty, 2013; Rosa et al., 2014). The project described in this paper is one of these and concerns in particular the conversion into the Stanford Dependencies of a multilingual parallel treebank for Italian, English and French called ParTUT. The next section will provide a brief description of ParTUT and its native format, along with that of the Universal Stanford Dependencies, while Section 3 will be devoted to the description of the conversion process, with some observations on its implications for the future development of ParTUT.
|
In this paper, we briefly described the ongoing project of conversion of a multilingual parallel treebank from its native representation format, i.e. TUT, into the Universal Stanford Dependencies. The main advantages of such an attempt lie in the opportunity to release the parallel resource in a widely recognized annotation format that opens its usability to a number of NLP tasks, and in a resulting representation of parallel syntactic structures that is more uniform and, therefore, easier to put in correspondence. Conversion, however, is not a straightforward process, and a number of issues are yet to be tackled in order to obtain a converted version that is fully compliant with the target format. The next steps of this work will focus in particular on such issues.
| 4
|
Syntax and Dependency Treebanks
|
62_2014
| 2,014
|
Manuela Sanguinetti, Emilio Sulis, Viviana Patti, Giancarlo Ruffo, Leonardo Allisio, Valeria Mussa, Cristina Bosco
|
Developing corpora and tools for sentiment analysis: the experience of the University of Turin group
|
ENG
| 7
| 4
| 1
|
Università di Torino
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Turin
|
The paper describes the ongoing experience at the University of Turin in developing linguistic resources and tools for sentiment analysis of social media. We describe in particular the development of Senti-TUT, a human annotated corpus of Italian Tweets including labels for sentiment polarity and irony, which has been recently exploited within the SENTIment POLarity Classification shared task at Evalita 2014. Furthermore, we report about our ongoing work on the Felicittà web-based platform for estimating happiness in Italian cities, which provides visualization techniques to interactively explore the results of sentiment analysis performed over Italian geotagged Tweets.
|
Several efforts are currently devoted to automatically mining opinions and sentiments from natural language, e.g. in social media posts, news and reviews about commercial products. This task entails a deep understanding of the explicit and implicit information conveyed by the language, and most of the approaches applied refer to annotated corpora and adequate tools for their analysis. In this paper, we will describe the experiences carried out at the Computer Science Department of the University of Turin in the development of corpora and tools for Sentiment Analysis and Opinion Mining (SA&OM) during the last few years. These experiences grew and are still growing in a scenario where a heterogeneous group of researchers with skills ranging from computational linguistics, sociology, visualization techniques, big data analysis and ontologies cooperates. Both the annotation applied in the developed corpora and the tools for analyzing and displaying data analysis depend in fact on a cross-fertilization of different research areas and on the expertise gained by the group members in their respective research fields. The projects we will describe are currently oriented to the investigation of aspects of data analysis that can be observed from such a particular perspective, e.g. figurative language or in-depth disagreement analysis, rather than to the achievement of high scores in the application of classifiers and statistical tools. The paper is organized as follows. The next section provides an overview of the annotated corpus Senti-TUT, which includes two main datasets: TW-NEWS (political domain) and TW-FELICITTA (generic collection), while Section 3 describes the main uses of Senti-TUT and the Felicittà application context.
|
The paper describes the experiences carried out at the University of Turin on topics related to SA&OM, with a special focus on the main directions we are following. The first one is the development of annotated corpora for Italian that can be exploited both in the training of automatic systems, in evaluation fora, and in investigating the nature of the data itself, also by a detailed analysis of the disagreement occurring in the datasets. The second direction, which is exemplified by ongoing work on the Felicittà platform, consists in the development of applications of SA on social media in the social and behavioral sciences field, where SA techniques can contribute to interpreting the degree of well-being of a country (Mitchell et al., 2013; Quercia et al., 2012), with a special focus on displaying the results generated by the analysis in a graphic form that can be easily readable also by non-expert users.
| 6
|
Sentiment, Emotion, Irony, Hate
|
63_2014
| 2,014
|
Enrico Santus, Qin Lu, Alessandro Lenci, Chu-Ren Huang
|
Unsupervised Antonym-Synonym Discrimination in Vector Space
|
ENG
| 4
| 1
| 0
|
The Hong Kong Polytechnic University Hong Kong, Università di Pisa
| 2
| 1
| 0
| 3
|
Enrico Santus, Qin Lu, Chu-Ren Huang
| 0
|
0
|
China, Italy
|
Hong Kong, Pisa
|
Automatic detection of antonymy is an important task in Natural Language Processing (NLP). However, currently, there is no effective measure to discriminate antonyms from synonyms because they share many common features. In this paper, we introduce APAnt, a new Average-Precision-based measure for the unsupervised identification of antonymy using Distributional Semantic Models (DSMs). APAnt makes use of Average Precision to estimate the extent and salience of the intersection among the most descriptive contexts of two target words. Evaluation shows that the proposed method is able to distinguish antonyms and synonyms with high accuracy, outperforming a baseline model implementing the co-occurrence hypothesis.
|
Antonymy is one of the fundamental relations shaping the organization of the semantic lexicon and its identification is very challenging for computational models (Mohammad et al., 2008). Yet, antonymy is essential for many Natural Language Processing (NLP) applications, such as Machine Translation (MT), Sentiment Analysis (SA) and Information Retrieval (IR) (Roth and Schulte im Walde, 2014; Mohammad et al., 2013). As for other semantic relations, computational lexicons and thesauri explicitly encoding antonymy already exist. Although such resources are often used to support the above mentioned NLP tasks, they have low coverage and many scholars have shown their limits: Mohammad et al. (2013), for example, have noticed that "more than 90% of the contrasting pairs in GRE closest-to-opposite questions are not listed as opposites in WordNet". The automatic identification of semantic relations is a core task in computational semantics. Distributional Semantic Models (DSMs) have often been used for their well known ability to identify semantically similar lexemes using corpus-derived co-occurrences encoded as distributional vectors (Santus et al., 2014a; Baroni and Lenci, 2010; Turney and Pantel, 2010; Padó and Lapata, 2007; Sahlgren, 2006). These models are based on the Distributional Hypothesis (Harris, 1954) and represent lexical semantic similarity as a function of distributional similarity, which can be measured by vector cosine (Turney and Pantel, 2010). However, these models are characterized by a major shortcoming. That is, they are not able to discriminate among the different kinds of semantic relations linking distributionally similar lexemes. For instance, the nearest neighbors of castle in the vector space typically include hypernyms like building, co-hyponyms like house, meronyms like brick, antonyms like shack, together with other semantically related words. While impressive results have been achieved in the automatic identification of synonymy (Baroni and Lenci, 2010; Padó and Lapata, 2007), methods for the identification of hypernymy (Santus et al., 2014a; Lenci and Benotto, 2012) and antonymy (Roth and Schulte im Walde, 2014; Mohammad et al., 2013) still need much work to achieve satisfying precision and coverage (Turney, 2008; Mohammad et al., 2008). This is the reason why semisupervised pattern-based approaches have often been preferred to purely unsupervised DSMs (Pantel and Pennacchiotti, 2006; Hearst, 1992). In this paper, we introduce a new Average-Precision-based distributional measure that is able to successfully discriminate antonyms from synonyms, outperforming a baseline implementing the co-occurrence hypothesis, formulated by Charles and Miller in 1989 and confirmed in other studies, such as those of Justeson and Katz (1991) and Fellbaum (1995).
|
This paper introduces APAnt, a new distributional measure for the identification of antonymy (an extended version of this paper will appear in Santus et al., 2014b). APAnt is evaluated in a discrimination task in which both antonymy- and synonymy-related pairs are present. In the task, APAnt has outperformed the baseline implementing the co-occurrence hypothesis (Fellbaum, 1995; Justeson and Katz, 1991; Charles and Miller, 1989) by 17%. APAnt performance supports our hypothesis, according to which synonyms share a number of salient contexts that is significantly higher than the one shared by antonyms. Ongoing research includes the application of APAnt to discriminate antonymy also from other semantic relations and to automatically extract antonymy-related pairs for the population of ontologies and lexical resources. Further work can be conducted to apply APAnt to other languages.
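The exact Average-Precision-based formula is not given in this summary, so the following is only an illustrative sketch of the underlying hypothesis (synonyms share more of their most salient contexts than antonyms do), with toy vectors standing in for corpus-derived association scores; it is not the published APAnt measure:

```python
def top_contexts(vector, n=100):
    """Return the n most salient contexts of a word, ranked by association score."""
    return [c for c, _ in sorted(vector.items(), key=lambda kv: -kv[1])[:n]]

def shared_salient_contexts(vec_a, vec_b, n=100):
    """Illustrative overlap score: how many of the two words' top-n contexts coincide.
    Synonyms are expected to share many salient contexts, antonyms far fewer."""
    a, b = set(top_contexts(vec_a, n)), set(top_contexts(vec_b, n))
    return len(a & b) / n

# Toy distributional vectors: context -> association score (assumption, not real corpus data)
hot = {"weather": 9.1, "summer": 8.7, "water": 6.2, "sun": 5.9, "tea": 4.0}
warm = {"weather": 8.9, "summer": 8.1, "water": 6.0, "blanket": 5.5, "tea": 4.2}
cold = {"winter": 9.3, "ice": 8.8, "water": 6.1, "wind": 5.2, "beer": 3.9}

print(shared_salient_contexts(hot, warm, n=5))  # high overlap -> synonym-like
print(shared_salient_contexts(hot, cold, n=5))  # low overlap  -> antonym-like
```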
| 22
|
Distributional Semantics
|
64_2014
| 2,014
|
Eva Sassolini, Sebastiana Cucurullo, Manuela Sassi
|
Methods of textual archives preservation
|
ENG
| 3
| 3
| 1
|
CNR-ILC
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
Over its fifty years of history the Institute for Computational Linguistics "Antonio Zampolli" (ILC) has stored a great many texts and corpora in various formats and record layouts. The consolidated experience in the acquisition, management and analysis of texts has allowed us to formulate a plan of recovery and long-term digital preservation of such texts. In this paper, we describe our approach and a specific case study in which we show the results of a real attempt at text recovery. The most important effort for us has been the study and comprehension of more or less complex specific data formats, almost always tied to an obsolete technology.
|
The international scientific communities consider electronic resources as a central part of cultural and intellectual heritage. Many institutions are involved in international initiatives directed to the preservation of digital materials. The Digital Preservation Europe project (DPE) is an example of a collaboration which involved many scientific communities at the international level, aimed at achieving the long-term preservation of resources. "Digital preservation is the active management of digital content over time to ensure ongoing access. Digital preservation may be applied to any form of information that is stored in a digital format. Digital preservation is necessary to ensure that the increasing quantity of digital information, created and stored by individuals and institutions, is available for current and future use. As digital information is created in a variety of formats, and stored on a variety of media, it is often vulnerable to damage or loss due to environmental factors, physical deterioration, or obsolescence of storage media and software applications. Digital materials may also be vulnerable to damage or loss due to malicious attack, accidental or intentional destruction, or human error. Digital preservation is an important part of a comprehensive approach to information management, and it is essential to preserve the authenticity, integrity, and accessibility of digital information over time. Digital preservation requires the development and implementation of policies, procedures, and technologies to ensure the long-term management of digital content. Digital preservation is a complex and evolving field, and it is important to stay informed about the latest developments in order to make informed decisions about the management of digital content." (Verheul 2006: 11-12) "Digital preservation combines policies, strategies and actions that ensure access to digital content over time. Digital preservation is broadly concerned with the discovery, acquisition, selection, appraisal, access, and long-term management of digital materials, regardless of format. Long-term management is an essential part of digital preservation and involves addressing issues such as changing technology, the fragility of digital media, and the need to maintain the authenticity and integrity of digital information in the face of storage media failure and technological change. The goal of digital preservation is the accurate rendering of authenticated content over time." (ALA 2007:2) In our specific case we are engaged in looking for systems and techniques necessary for the long-term management of digital textual materials, stored at ILC over many years of work. At the beginning we did not know the age of all the materials. However, at a later stage, we found a number of digital materials, some of which dated as far back as the 70's. For this reason, the work of recovery is extremely complex and demanding. Firstly, the format of file/text encoding was often obsolete and not associated with exhaustive documentation. Secondly, many texts contained disused linguistic annotation schemas. Consequently, we have adopted different strategies for "textual resource" retrieval, before focusing our attention on the application of standardized measures for the conservation of the texts.
|
The preservation of data produced with outdated technologies should be handled especially by public institutions, as such data is part of the historical heritage. Therefore, it is necessary for us to complete this work, so that the resources can be reused. This will be possible only through a joint effort of the institutions involved at the regional, national and international levels. ILC is currently establishing a number of co-operation agreements, like the one with the "Accademia della Crusca", in an attempt to gather data resources for maintenance, preservation and re-use by third parties.
| 7
|
Lexical and Semantic Resources and Analysis
|
65_2014
| 2,014
|
Asad Sayeed, Vera Demberg
|
Combining unsupervised syntactic and semantic models of thematic fit
|
ENG
| 2
| 1
| 0
|
Saarland University
| 1
| 1
| 1
| 2
|
Asad Sayeed, Vera Demberg
| 0
|
0
|
Germany
|
Saarbrücken
|
We explore the use of the SENNA semantic role-labeller to define a distributional space to build a fully unsupervised model of event-entity thematic fit judgements. Existing models use syntactic dependencies for this. Our Distributional Memory model outperforms a syntax-based model by a wide margin, matches an augmented model that uses hand-crafted rules, and provides results that can be easily combined with the augmented model, improving matching over multiple thematic fit judgement tasks.
|
It is perfectly conceivable that automated tasks in natural language semantics can be accomplished entirely through models that do not require the contribution of semantic features to work at high accuracy. Unsupervised semantic role labellers such as that of Titov and Klementiev (2011) and Lang and Lapata (2011) do exactly this: predict semantic roles strictly from syntactic realizations. In other words, for practical purposes, the relevant and frequent semantic cases might be completely covered by learned syntactic information. For example, given a sentence The newspaper was put on the table, such SRL systems would identify that the table should receive a "location" role purely from the syntactic dependencies centered around the preposition on. We could extend this thinking to a slightly different task: thematic fit modelling. It could well be the case that the table could be judged a more appropriate filler of a location role for put than, e.g., the perceptiveness, entirely due to information about the frequency of word collocations and syntactic dependencies collected through corpus data, handmade grammars, and so on. In fact, today's distributional models used for modelling of selectional preference or thematic fit generally base their estimates on syntactic or string co-occurrence models (Baroni and Lenci, 2010; Ritter et al., 2010; Séaghdha, 2010). The Distributional Memory (DM) model by Baroni and Lenci (2010) is one example of an unsupervised model based on syntactic dependencies, which has been successfully applied to many different distributional similarity tasks, and also has been used in compositional models (Lenci, 2011). While earlier work has shown that syntactic relations and thematic roles are related concepts (Levin, 1993), there are also a large number of cases where thematic roles assigned by a role labeller and their best-matching syntactic relations do not correspond (Palmer et al., 2005). However, it is possible that this non-correspondence is not a problem for estimating typical agents and patients from large amounts of data: agents will most of the time coincide with subjects, and patients will most of the time coincide with syntactic objects. On the other hand, the best resource for estimating thematic fit should be based on labels that most closely correspond to the target task, i.e. semantic role labelling, instead of syntactic parsing. In this paper, we want to test how far a DM trained directly on a role labeller which produces PropBank-style semantic annotations can complement the syntax-based DM model on thematic fit tasks, given a similar corpus of training data. We maintain the unsupervised nature of both models by combining their ratings by averaging without any weight estimation (we "guess" 50%) and show that we get an improvement in matching human judgements collected from previous experiments on agent/patient roles, location, and manner roles. We demonstrate that a fully unsupervised model based on the SENNA role-labeller (Collobert et al., 2011) outperforms a corresponding model based on MaltParser dependencies (DepDM) by a wide margin. Furthermore, we show that the SENNA-based model can almost match B&L's better performing TypeDM model, which involves hand-crafted rules, and demonstrate that the SENNA-based model makes a contribution over and above the syntactic model in a range of thematic role labelling tasks.
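The combination described is a plain unweighted (50/50) average of the two models' thematic-fit ratings. A minimal sketch follows; the z-scoring step and the toy scores are assumptions for illustration rather than the authors' exact procedure:

```python
from statistics import mean, stdev

def zscore(scores):
    """Put a model's scores on a comparable scale before averaging (assumption)."""
    m, s = mean(scores.values()), stdev(scores.values())
    return {k: (v - m) / s for k, v in scores.items()}

def combine(syntax_scores, srl_scores):
    """Unweighted (50/50) combination of two thematic-fit models over the same candidates."""
    a, b = zscore(syntax_scores), zscore(srl_scores)
    return {k: (a[k] + b[k]) / 2 for k in a}

# Hypothetical thematic-fit scores for fillers of the location role of "put"
syntax_dm = {"table": 0.71, "shelf": 0.66, "perceptiveness": 0.08}
senna_dm = {"table": 0.80, "shelf": 0.62, "perceptiveness": 0.03}
print(sorted(combine(syntax_dm, senna_dm).items(), key=lambda kv: -kv[1]))
```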
|
We have constructed a distributional memory based on SENNA-annotated thematic roles and shown an improved correlation with human data when combining it with the high-performing syntax-based TypeDM. We found that, even when built on similar corpora, SRL brings something to the table over and above syntactic parsing. In addition, our SENNA-based DM model was constructed in a manner roughly equivalent to B&L's simpler DepDM model, and yet it performs at a level far higher than DepDM on the Padó data set, on its own approaching the performance of TypeDM. It is likely that an SRL-based equivalent to TypeDM would further improve performance, and is thus a possible path for future work. Our work also contributes the first evaluation of structured distributional models of semantics for thematic role plausibility for roles other than agent and patient.
| 22
|
Distributional Semantics
|
66_2014
| 2,014
|
Romain Serizel, Diego Giuliani
|
Deep neural network adaptation for children's and adults' speech recognition
|
ENG
| 2
| 0
| 0
|
Fondazione Bruno Kessler
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
This paper introduces a novel application of the hybrid deep neural network (DNN) - hidden Markov model (HMM) approach for automatic speech recognition (ASR) to target groups of speakers of a specific age/gender. We target three speaker groups consisting of children, adult males and adult females, respectively. The group-specific training of DNN is investigated and shown to be not always effective when the amount of training data is limited. To overcome this problem, the recent approach that consists in adapting a general DNN to domain/language specific data is extended to target age/gender groups in the context of hybrid DNN-HMM systems, reducing consistently the phone error rate by 15-20% relative for the three different speaker groups.
|
Speaker-related acoustic variability is one of the major sources of errors in automatic speech recognition. In this paper we cope with age group differences, by considering the relevant case of children versus adults, as well as with male/female differences. Here DNN is used to deal with the acoustic variability induced by age and gender differences. When an ASR system trained on adults' speech is employed to recognise children's speech, performance decreases drastically, especially for younger children (Wilpon and Jacobsen, 1996; Das et al., 1998; Claes et al., 1998; Potamianos and Narayanan, 2003; Giuliani and Gerosa, 2003; Gerosa et al., 2007). A number of attempts have been reported in the literature to contrast this effect. Most of them try to compensate for spectral differences caused by differences in vocal tract length and shape by warping the frequency axis of the speech power spectrum of each test speaker or by transforming acoustic models (Potamianos and Narayanan, 2003; Das et al., 1998; Claes et al., 1998). However, to ensure good recognition performance, age-specific acoustic models trained on speech collected from children of the target age, or group of ages, are usually employed (Wilpon and Jacobsen, 1996; Hagen et al., 2003; Nisimura et al., 2004; Gerosa et al., 2007). Typically, much less training data are available for children than for adults. The use of adults' speech for reinforcing the training data in the case of a lack of children's speech was investigated in the past (Wilpon and Jacobsen, 1996; Steidl et al., 2003). However, in order to achieve a recognition performance improvement when training with a mixture of children's and adults' speech, speaker normalisation and speaker adaptive training techniques are usually needed (Gerosa et al., 2009). During the past years, DNN has proven to be an effective alternative to HMM - Gaussian mixture model (GMM) based ASR (HMM-GMM) (Bourlard and Morgan, 1994; Hinton et al., 2012), obtaining good performance with context dependent hybrid DNN-HMM (Mohamed et al., 2012; Dahl et al., 2012). Capitalising on their good classification and generalisation skills, DNNs have been used widely in multi-domain and multi-language tasks (Sivadas and Hermansky, 2004; Stolcke et al., 2006). The main idea is usually to first exploit a task independent (multi-lingual/multi-domain) corpus and then to use a task specific corpus. One approach consists in using the different corpora at different stages of the DNN training. The task independent corpus is used only for the pretraining (Swietojanski et al., 2012) or for a general first training (Le et al., 2010; Thomas et al., 2013) and the task specific corpus is used for the final training/adaptation of the DNN. This paper introduces the use of the DNN-HMM approach for phone recognition in age and gender dependent groups, extending the idea introduced in (Yochai and Morgan, 1992) to the DNN context. Three target groups of speakers are considered here, that is children, adult males and adult females. There is only a limited amount of labeled data for such groups. To overcome this problem, a DNN trained on speech data from all the three groups of speakers is adapted to the age/gender group specific corpora. First it is shown that training a DNN only on a group specific corpus is not effective when only limited labeled data is available. Then the method proposed in (Thomas et al., 2013) is adapted to the age/gender specific problem. The rest of this paper is organized as follows. 
Section 2 introduces the general training and adaptation methods. Experimental setup is described in Section 3 and results are presented in Section 4. Finally, conclusions are provided in Section 5.
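A minimal sketch (not the authors' implementation) of the general-then-adapt strategy described in this introduction: an acoustic DNN is first trained on pooled data from all speaker groups and then adapted on a small group-specific set with a lower learning rate. PyTorch is an assumed stand-in, and the feature/state sizes and random data below are placeholders.

```python
# Sketch, assuming PyTorch: general training on pooled data, then adaptation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_dnn(n_feats=39, n_states=300, hidden=512, layers=4):
    blocks, dim = [], n_feats
    for _ in range(layers):
        blocks += [nn.Linear(dim, hidden), nn.Sigmoid()]
        dim = hidden
    blocks.append(nn.Linear(dim, n_states))   # unnormalised HMM-state posteriors
    return nn.Sequential(*blocks)

def train(model, loader, lr, epochs):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, states in loader:           # frame features, aligned HMM states
            opt.zero_grad()
            loss_fn(model(feats), states).backward()
            opt.step()
    return model

def random_loader(n):                          # stand-in for real aligned speech data
    x = torch.randn(n, 39)
    y = torch.randint(0, 300, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

dnn = make_dnn()
dnn = train(dnn, random_loader(5000), lr=0.1, epochs=2)    # general training (all groups)
dnn = train(dnn, random_loader(500), lr=0.01, epochs=2)    # group-specific adaptation
```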
|
In this paper we have investigated the use of the DNN-HMM approach in a phone recognition task targeting three groups of speakers, that is children, adult males and adult females. It has been shown that, in under-resourced conditions, group-specific training does not necessarily lead to PER improvements. To overcome this problem a recent approach, which consists in adapting a task-independent DNN for tandem ASR to domain/language specific data, has been extended to age/gender specific DNN adaptation for DNN-HMM. The DNN-HMM adapted on a low amount of group-specific data has been shown to improve the PER by 15-20% relative with respect to the DNN-HMM baseline system trained on speech data from all the three groups of speakers. In this work we have proven the effectiveness of the hybrid DNN-HMM approach when training with a limited amount of data and targeting speaker populations of different age/gender. Future work will be devoted to embedding the results presented here in a large vocabulary speech recogniser especially targeting under-resourced groups of speakers such as children.
| 13
|
Multimodal
|
67_2014
| 2,014
|
Antonio Sorgente, Giuseppe Vettigli, Francesco Mele
|
An Italian Corpus for Aspect Based Sentiment Analysis of Movie Reviews
|
ENG
| 3
| 0
| 0
|
CNR-ICIB
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Naples
|
In this paper we will present an Italian corpus focused on the domain of movie reviews, developed in order to support our ongoing research for the development of new models about Sentiment Analysis and Aspect Identification in the Italian language. The corpus that we will present contains a set of sentences manually annotated according to the various aspects of the movie that have been discussed in the sentence and the polarity expressed towards that particular aspect. In this paper we will present the annotation guidelines applied, some statistics about the corpus and the preliminary results about the identification of the aspects.
|
Nowadays, on the Web there is a huge amount of unstructured information about public opinion and it continues to grow rapidly. Analysing the opinions expressed by the users is an important step to evaluate the quality of a product. In this scenario, the tools provided by Sentiment Analysis and Opinion Mining are crucial to process this information. In the particular case of movie reviews, the number of reviews that a movie receives on-line grows quickly. Some popular movies can receive hundreds of reviews and, furthermore, many reviews are long and sometimes they contain only few sentences expressing the actual opinions. This makes it hard for a potential viewer to read them and make an informed decision about whether to watch a movie or not. In the case that one only reads a few reviews, the choice may be biased. The large number of reviews also makes it hard for movie producers to keep track of viewers’ opinions. The recent advances in Sentiment Analysis have shown that coarse overall sentiment scores fail to adequately represent the multiple potential aspects on which an entity can be evaluated (Socher et al., 2013). For example, if we consider the following review from Amazon.com about the movie Inception: “By far one of the best movies I’ve ever seen. Visually stunning and mentally challenging. I would recommend this movie to people who are very deep and can stick with a movie to get the true meaning of the story.” One can see that, even if the review is short, it not only expresses an overall opinion but also contains opinions about two other aspects of the movie: the photography and the story. So, in order to obtain a more detailed sentiment, an analysis that considers different aspects is required. In this work, we present an Italian corpus focused on the domain of movie reviews developed in order to support our ongoing effort for the development of new models about Sentiment Analysis and Aspect Identification in the Italian language. The paper is structured as follows. In Section 2 we present the motivations that led us to the creation of a new corpus and a short survey of related resources that already exist. Section 3 describes the guidelines used to annotate the corpus, while Section 4 presents some statistical information about it. In Section 5 we present some preliminary experiments about the identification of the aspects. Finally, in Section 6 some conclusions will be offered.
|
We introduced an Italian corpus of sentences extracted from movie reviews. The corpus has been specifically designed to support the development of new tools for Sentiment Analysis in Italian. We believe that the corpus can be used to train and test new models for sentence-level sentiment classification and aspect-level opinion extraction. In the paper, various aspects of the corpus we created have been described. Also, the results of some preliminary experiments about the automatic identification of the aspects have been shown.
| 6
|
Sentiment, Emotion, Irony, Hate
|
68_2014
| 2,014
|
Stefania Spina
|
Il Perugia Corpus: una risorsa di riferimento per l'italiano. Composizione, annotazione e valutazione
|
ITA
| 1
| 1
| 1
|
Università per Stranieri di Perugia
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Perugia
|
The Perugia Corpus (PEC) is a corpus of contemporary written and spoken Italian, which comprises over 26 million words. The objective behind its creation was to address the lack of a reference corpus of Italian. This article describes the criteria underlying its composition, its structure in 10 sections and subsections, and its multi-level annotation, together with its evaluation.
|
The Perugia Corpus (PEC) is a reference corpus of contemporary Italian, written and spoken1; it consists of over 26 million words, distributed in 10 different sections corresponding to as many text genres, and equipped with a multi-level annotation. The PEC intends to address the lack of a reference corpus (written and spoken) from which Italian studies have so far suffered. Because of its nature as a reference resource (EAGLES, 1996), the PEC is designed to provide information that is as general as possible about Italian and its main written and spoken varieties. The philosophy that guided the composition of the PEC is therefore radically different from that underlying some last-generation web corpora of Italian (Baroni and Bernardini, 2006; Kilgarriff and Grefenstette, 2003), such as Paisà (Lyding et al., 2014), itWac (Baroni and Kilgarriff, 2006) or itTenTen (Jakubíček et al., 2013), but also from that of less recent corpora such as Repubblica (Baroni et al., 2004) and CORIS/CODIS (Rossini Favretti et al., 2002): the choice was in fact to privilege the differentiation of text genres, including spoken language, over the size of the corpus. In addition, the focus was on the re-use of existing and available resources (Zampolli, 1991), which are however sometimes dispersed and difficult to consult; to these were added new data, gathered with the double purpose of filling gaps where no data were available for Italian and of updating existing, but already dated, resources. The PEC can therefore be considered a “low cost” reference corpus, of limited size but with a good representativeness of the various written and spoken varieties of Italian. The limited size of the PEC also has two advantages: it allows more manageable result sets in the querying process (Hundt and Leech, 2012), and it allows good accuracy in the annotation (see par. 3.2 and 2).
|
The PEC represents the first reference corpus of contemporary written and spoken Italian; in its composition, the differentiation of text genres, including spoken language, was privileged over corpus size. Realized with limited resources and in a short time, re-using existing language resources where possible, the PEC constitutes a low-cost compromise between the creation of new resources and the re-use of existing ones. The PEC is queried through the CWB interface and the Corpus Query Processor (Evert and Hardie, 2011), which allows searching for words, sequences of words and annotations; the creation of a web interface via CQPweb (Hardie, 2012), accessible to the public, is planned.
| 7
|
Lexical and Semantic Resources and Analysis
|
69_2014
| 2,014
|
Fabio Tamburini
|
Are Quantum Classifiers Promising?
|
ENG
| 1
| 0
| 0
|
Università di Bologna
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Bologna
|
This paper presents work in progress on the development of a new general purpose classifier based on Quantum Probability Theory. We will propose a kernel-based formulation of this classifier that is able to compete with state-of-the-art machine learning methods when classifying instances from two hard artificial problems and two real tasks taken from the speech processing domain.
|
Quantum Mechanics Theory (QMT) is one of the most successful theories in modern science. Despite its ability to properly describe most natural phenomena in the physics realm, the attempts to prove its effectiveness in other domains remain quite limited. Only in recent years have some scholars tried to embody principles derived from QMT into their specific fields. This connection has been actively studied, for example, by the Information Retrieval community (Zuccon et al., 2009; Melucci, van Rijsbergen, 2011; González, Caicedo, 2011) and in the domain of cognitive sciences and decision making (Busemeyer, Bruza, 2012). The NLP community has also started to look at QMT with interest and some studies using it have already been presented (Blacoe et al., 2013; Liu et al., 2013). This paper presents work in progress on the development of a new classifier based on Quantum Probability Theory. Starting from the work presented in (Liu et al., 2013), we will show all the limits of this simple quantum classifier and propose a new kernel-based formulation able to solve most of its problems and able to compete with a state-of-the-art classifier, namely Support Vector Machines, when classifying instances from two hard artificial problems and two real tasks taken from the speech processing domain.
|
This paper presented a first attempt to produce a general purpose classifier based on Quantum Probability Theory. Compared with the early experiments in (Liu et al., 2013), KQC is more powerful and obtains better performance. The results obtained in our experiments are quite encouraging and we are tempted to answer ‘yes’ to the question presented in the paper title. This is a work in progress and the KQC is not free from problems. Despite its potential to outperform SVM using linear kernels, it is very hard to find a tradeoff between defining decision boundaries with maximum margins and maximising the classifier's generalisation abilities. A long optimisation process on the training set maximises the margins between classes but could potentially lead to poor generalisation on new data. Making more experiments and evaluations in that direction is one of our future plans.
| 13
|
Multimodal
|
70_2014
| 2,014
|
Francesco Tarasconi, Vittorio Di Tomaso
|
Geometric and statistical analysis of emotions and topics in corpora
|
ENG
| 2
| 0
| 0
|
CELI Language Technology
| 1
| 0
| 0
| 0
|
0
| 2
|
Francesco Tarasconi, Vittorio Di Tomaso
|
Italy
|
Turin
|
NLP techniques can enrich unstructured textual data, detecting topics of interest and emotions. The task of understanding emotional similarities between different topics is crucial, for example, in analyzing the Social TV landscape. A measure of how much two audiences share the same feelings is required, but also a sound and compact representation of these similarities. After evaluating different multivariate approaches, we achieved these goals by adapting Multiple Correspondence Analysis (MCA) techniques to our data. In this paper we provide background information and methodological reasons for our choice. We also provide an example of Social TV analysis, performed on Twitter data collected between October 2013 and February 2014.
|
Classification of documents based on topics of interest is a popular NLP research area; see, for example, Hamamoto et al. (2005). Another important subject, especially in the context of Web 2.0 and social media, is sentiment analysis, mainly aimed at detecting polarities of expressions and opinions (Liu, 2012). A sentiment analysis task which has seen fewer contributions, but of growing popularity, is the study of emotions (Wiebe et al., 2005), which requires introducing and analyzing multiple, potentially correlated variables (appropriate “emotional dimensions”). This is especially important in the study of the so-called Social TV (Cosenza, 2012): people can share their TV experience with other viewers on social media using smartphones and tablets. We define the empirical distribution of different emotions among viewers of a specific TV show as its emotional profile. Comparing at the same time the emotional profiles of several formats requires appropriate descriptive statistical techniques. During the research we conducted, we evaluated and selected geometrical methods that satisfy these requirements and provide an easy to understand and coherent representation of the results. The methods we used can be applied to any dataset of documents classified based on topics and emotions; they also represent a potential tool for the quantitative analysis of any NLP annotated data. We used the Blogmeter platform1 to download and process textual contents from social networks (Bolioli et al., 2013). Topics correspond to TV programs discussed on Twitter. Nine emotions are detected: the basic six according to Ekman (Ekman, 1972) (anger, disgust, fear, joy, sadness, surprise), love (a primary one in Parrot’s classification) and like/dislike expressions, quite common on Twitter.
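A minimal sketch, not the paper's implementation, of the geometric idea behind comparing emotional profiles: a simple correspondence analysis of a topic x emotion contingency table (a simplified stand-in for the MCA used in the paper). The counts below are invented.

```python
# Sketch: correspondence analysis of a topic x emotion contingency table.
import numpy as np

# Hypothetical counts: rows = TV programs (topics), columns = emotions.
N = np.array([[120,  30,  10,  5],
              [ 40,  80,  25, 15],
              [ 10,  20,  90, 60]], dtype=float)

P = N / N.sum()                    # correspondence matrix
r = P.sum(axis=1)                  # row masses
c = P.sum(axis=0)                  # column masses
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: nearby rows have similar emotional profiles.
row_coords = np.diag(1 / np.sqrt(r)) @ U * sv
col_coords = np.diag(1 / np.sqrt(c)) @ Vt.T * sv
print(row_coords[:, :2])           # 2-D map of the topics
```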
|
By applying carefully chosen multivariate statistical techniques, we have shown how to represent and highlight important emotional relations between topics. Further results from the MCA field can be tested on datasets similar to the ones we used. For example, additional information about opinion polarity and document authors (such as Twitter users) could be incorporated in the analysis. The geometric approach to MCA (Le Roux and Rouanet, 2004) could be interesting for studying in greater detail the clouds of impressions and documents (the J and D matrices); authors could also be considered as mean points of well-defined subclouds.
| 6
|
Sentiment, Emotion, Irony, Hate
|
71_2014
| 2,014
|
Mirko Tavosanis
|
Il Corpus ICoN: una raccolta di elaborati di italiano L2 prodotti in ambito universitario
|
ITA
| 1
| 0
| 0
|
Università di Pisa
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Pisa
|
The contribution presents the essential characteristics of the Corpus ICoN. The corpus collects written assignments produced over 13 years by university students; the assignments are divided into two subcorpora, dedicated respectively to students who have Italian as L1 and to those who know it as L2/LS.
|
Corpora of texts produced in L2 have long been an essential tool for the study of language learning. In the case of Italian, however, although important and well-made resources exist, the number of corpora is still considered insufficient for many types of research (for an overview: Andorno and Rastelli 2009). The corpus under construction described below aims to contribute in this direction. The work is placed within the activities of the PRIN “Scritture brevi” project, and the final product is expected to be used first by the Institute for Computational Linguistics of the CNR in Pisa for the development of automatic evaluation tools for the learning process. The work is currently ongoing. The completion of the activities is scheduled for the end of 2015, but the overall characteristics of the corpus are already well defined and thus make an articulated presentation possible.
|
The processing of the corpus is still ongoing. However, the trials carried out so far are very promising and confirm the usefulness of the project. Of particular value seems to be the possibility of comparing the texts produced by students who have Italian as L1 with those produced by students who do not, in circumstances where the communicative purposes correspond to a precise teaching reality.
| 8
|
Learner Corpora and Language Acquisition
|
72_2014
| 2,014
|
Olga Uryupina, Alessandro Moschitti
|
Coreference Resolution for Italian: Assessing the Impact of Linguistic Components
|
ENG
| 2
| 1
| 1
|
Università di Trento, Qatar Computing Research Institute
| 2
| 1
| 0
| 1
|
Alessandro Moschitti
| 0
|
0
|
Italy, Qatar
|
Trento, Rome, Ar-Rayyan
|
This paper presents a systematic evaluation of two linguistic components required to build a coreference resolution system: mention detection and mention description. We compare gold standard annotations against the output of the modules based on the state-of-the-art NLP for Italian. Our experiments suggest the most promising direction for future work on coreference in Italian: we show that, while automatic mention description affects the performance only mildly, the mention detection module plays a crucial role for the end-to-end coreference performance. We also show that, while a considerable number of mentions in Italian are zero pronouns, their omission doesn’t affect a general-purpose coreference resolver, suggesting that more specialized algorithms are needed for this subtask.
|
Coreference Resolution is an important prerequisite for a variety of Natural Language Processing tasks, in particular, for Information Extraction and Question Answering, Machine Translation or Single-document Summarization. It is, however, a challenging task, involving complex inference over heterogeneous linguistic cues. Several high-performance coreference resolvers have been proposed recently in the context of the CoNLL-2011 and CoNLL-2012 shared tasks (Pradhan et al., 2011; Pradhan et al., 2012). These systems, however, are engineered to process English documents and cannot be directly applied to other languages: while the CoNLL-2012 shared task includes Arabic and Chinese datasets, most participants have not investigated any language-specific approaches and have relied on the same universal algorithm, retraining it for particular corpora. To our knowledge, only very few systems have been proposed so far to provide end-to-end coreference resolution in Italian. In the context of the SemEval-2010 shared task (Recasens et al., 2010), four systems have attempted Italian coreference. Among these toolkits, only BART relied on any language-specific solutions at this stage (Broscheit et al., 2010; Poesio et al., 2010). The TANL system, however, was enhanced with language-specific information and integrated into the University of Pisa Italian pipeline later on (Attardi et al., 2012). At Evalita 2009 and 2011, different variants of coreference resolution were proposed as shared tasks (Lenzi and Sprugnoli, 2009; Uryupina and Poesio, 2012); in both cases, only one participant managed to submit the final run. One of the bottlenecks in creating high-performance coreference resolvers lies in the complexity of their architecture. Coreference is a deep linguistic phenomenon and state-of-the-art systems incorporate multiple modules for various related subtasks. Even creating a baseline end-to-end resolver is therefore a difficult engineering task. Going beyond the baseline is even more challenging, since it is generally unclear how different types of errors might affect the overall performance level. This paper focuses on systematic evaluation of different sub-modules of a coreference resolver to provide a better understanding of their impact on the system’s performance and thus suggest more promising avenues for future research. Starting with a gold pipeline, we gradually replace its components with automatic modules, assessing the impact. The ultimate goal of our study is to boost the performance level for Italian. We are focusing on improving the language-specific representation, leaving aside any comparison between coreference models (for example, mention-pair vs. mention-entity vs. graph-based).
|
In this paper, we have attempted an extensive evaluation of the impact of two language-specific components on the performance of a coreference resolver for Italian. We show that the mention extraction module plays a crucial role, whereas the contribution of the mention description model, while still important, is much less pronounced. This suggests that the mention extraction subtask should be the primary focus at the beginning of language-specific research on coreference. Our future work in this direction includes developing a robust statistical mention detector for Italian based on parse trees. We also show that zero pronouns cannot be handled by a general-purpose coreference resolver and should therefore be addressed by a separate system, combining their extraction and resolution. Finally, our study has not addressed the last language-specific component of the coreference pipeline, the feature extraction module. Its performance cannot be assessed via a comparison with an oracle since there are no perfect gold features. In the future, we plan to evaluate the impact of this component by comparing different feature sets, engineered both manually and automatically.
| 7
|
Lexical and Semantic Resources and Analysis
|
73_2014
| 2,014
|
Andrea Vanzo, Giuseppe Castellucci, Danilo Croce, Roberto Basili
|
A context based model for Twitter Sentiment Analysis in Italian
|
ENG
| 4
| 0
| 0
|
Università di Roma Tor Vergata
| 3
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Rome
|
Recent works on Sentiment Analysis over Twitter leverage the idea that the sentiment depends on a single incoming tweet. However, tweets are plunged into streams of posts, thus making available a wider context. The contribution of this information has been recently investigated for the English language by modeling polarity detection as a sequential classification task over streams of tweets (Vanzo et al., 2014). Here, we want to verify the applicability of this method to a morphologically richer language, i.e. Italian.
|
Web 2.0 and Social Networks allow users to write about their life and personal experiences. This huge amount of data is crucial in the study of the interactions and dynamics of subjectivity on the Web. Sentiment Analysis (SA) is the computational study and automatic recognition of opinions and sentiments. Twitter is a microblogging service that counts about a billion active users. In Twitter, SA is traditionally treated as any other text classification task, as proved by most systems participating in the Sentiment Analysis in Twitter task at SemEval-2013 (Nakov et al., 2013). A Machine Learning (ML) setting makes it possible to induce detection functions from real-world labeled examples. However, the shortness of the message and the resulting semantic ambiguity represent a critical limitation, thus making the task very challenging. Let us consider the following message between two users: Benji: @Holly sono completamente d’accordo con te. The tweet sounds like a reply to the previous one. Notice how no lexical or syntactic property allows us to determine the polarity. Let us now look at the entire conversation: Benji: @Holly con un #RigoreAl90 vinci facile!! Holly: @Benji Lui vince sempre però :) accanto a chiunque.. Nessuno regge il confronto! Benji: @Holly sono completamente d’accordo con te. The first is clearly a positive tweet, followed by a positive one that makes the third positive as well. Thus, through the conversation we can disambiguate even a very short message. We want to leverage this to define a context-sensitive SA model for the Italian language, in line with (Vanzo et al., 2014). The polarity detection of a tweet is modeled as a sequential classification task through the SVMhmm learning algorithm (Altun et al., 2003), as it allows classifying an instance (i.e. a tweet) within an entire sequence. First experimental evaluations confirm the effectiveness of the proposed sequential tagging approach combined with the adopted contextual information for the Italian language as well. A survey of the existing approaches is presented in Section 2. Then, Section 3 provides an account of the context-based model. The experimental evaluation is presented in Section 4.
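A sketch of the sequential-tagging idea described in this introduction. The paper uses the SVMhmm tool; here a CRF from sklearn_crfsuite stands in for it (an assumption, not the authors' setup), and the toy conversations and bag-of-words features are invented.

```python
# Sketch: polarity of each tweet tagged jointly over its conversation.
import sklearn_crfsuite

def tweet_features(tweet, prev_tweet=None):
    feats = {"bow:" + w.lower(): 1.0 for w in tweet.split()}
    if prev_tweet is not None:                          # conversational context
        feats.update({"prev:" + w.lower(): 1.0 for w in prev_tweet.split()})
    return feats

def conversation_to_sequence(tweets):
    return [tweet_features(t, tweets[i - 1] if i > 0 else None)
            for i, t in enumerate(tweets)]

train_conversations = [                                  # (tweets, polarity labels)
    (["con un #RigoreAl90 vinci facile!!",
      "Lui vince sempre però :) nessuno regge il confronto!",
      "sono completamente d'accordo con te"], ["pos", "pos", "pos"]),
    (["che delusione questa partita", "già, davvero brutta"], ["neg", "neg"]),
]

X = [conversation_to_sequence(tweets) for tweets, _ in train_conversations]
y = [labels for _, labels in train_conversations]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict([conversation_to_sequence(["sono d'accordo con te"])]))
```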
|
In this work, the role of contextual information in supervised Sentiment Analysis over Twitter is investigated for the Italian language. Experimental results confirm the empirical findings presented in (Vanzo et al., 2014) for the English language. Although the size of the involved dataset is still limited, i.e. about 1,400 tweets, the importance of contextual information is emphasized within the considered markovian approach: it is able to take advantage of the dependencies that exist between different tweets in a conversation. The approach is also largely applicable, as all experiments have been carried out without the use of any manually coded resource, but mainly exploiting unannotated material within the distributional method. A larger experiment, possibly on a bigger dataset such as SentiTUT4, will be carried out.
| 6
|
Sentiment, Emotion, Irony, Hate
|
74_2014
| 2,014
|
Rossella Varvara, Elisabetta Jezek
|
Semantic role annotation of instrument subjects
|
ENG
| 2
| 2
| 1
|
Università di Trento, Università di Pavia
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento, Pavia
|
Semantic role annotation has become widely used in NLP and lexical resource implementation. Even if attempts at standardization are being developed, points of discordance are still present. In this paper we consider a problematic semantic role, the Instrument role, which presents differences in definition and causes problems of attribution. In particular, it is not clear whether to assign this role to inanimate entities occurring as subjects or not. This problem is especially relevant (1) because of its treatment in practical annotation and semantic role labeling, and (2) because it affects the whole definition of semantic roles. We propose arguments to support the claim that inanimate nouns denoting instruments in subject position are not instantiations of the Instrument role, but are Cause, Agent or Theme. Ambiguities in the annotation of these cases are due to confusion between semantic roles and ontological types.
|
Semantically annotated resources have become widely used and requested in the field of Natural Language Processing, growing into a productive research area. This trend can be confirmed by looking at the repeated attempts at the implementation of annotated resources (FrameNet, VerbNet, PropBank, SALSA, LIRICS, Senso Comune) and at the task of automatic Semantic Role Labeling (Gildea and Jurafsky 2002, Surdeanu et al. 2007, Màrquez et al. 2008, Lang and Lapata 2010, Titov and Klementiev 2012, among others). Since their first introduction by Fillmore (1967), semantic roles have been described and defined in many different ways, with different sets and different levels of granularity, from macro-roles (Dowty 1991) to frame-specific ones (Fillmore et al. 2002). In order to reach a common standard of number and definition, the LIRICS (Linguistic Infrastructure for Interoperable ResourCes and Systems) project has recently evaluated several approaches for semantic role annotation and proposed an ISO (International Organization for Standardization) ratified standard that enables the exchange and reuse of (multilingual) language resources. In this paper we examine some problematic issues in semantic role attribution. We will highlight a case, the Instrument role, whose definition and designation should be, in our opinion, reconsidered. The topic is particularly relevant since there are differences in its treatment across lexical resources and since the theoretical debate is still lively. Moreover, this matter highlights aspects of the nature of semantic roles, relevant both for their theoretical definition and for practical annotation, such as the difference between semantic roles and ontological types. The former refer to the role of participants in the particular event described by the linguistic utterance, the latter to the inherent properties of the entity. We argue that this is a main point in the annotation, because, even in the recent past, roles have been frequently tagged according to the internal properties of the entities involved and not, as it should be, according to their role in the particular event described. This analysis arose from the first step of the implementation of the Senso Comune resource (Vetere et al. 2012). With the aim of providing it with semantic roles, a first annotation experiment was conducted to check the reliability of the role set and the annotation procedure (Ježek et al. 2014). The dataset was composed of 66 examples without disambiguation, 3 for each of 22 target verbs, and it was annotated for semantic roles by 8 annotators. They were instructed with a guideline in which a set of 24 coarse-grained roles was defined, with examples and a taxonomy. During the evaluation process, the major cases of disagreement were highlighted. The present study is based on the evidence coming from these data: the Instrument role caused several misunderstandings (see also Varvara 2013). Nevertheless, our analysis will look primarily at examples from the literature and other resources in order to rethink this role and to reach a standardization. We propose to consider what are called instrument subjects (Alexiadou and Schäfer 2006) as instances of three different roles (Cause, Agent and Theme) rather than as Instrument.
|
In this paper we have shown how theoretical and data analysis can mutually improve each other. The literature has offered critical discussion about the Instrument role and the case of instrument subjects, a discussion that can be useful for the definition and annotation of semantic roles in the implementation of lexical resources. Moreover, the analysis of annotated data can reveal fallacies in the reliability of the role set, bringing insights from application back to theory. Finally, our study highlights the importance of distinguishing between semantic roles, relational notions belonging to the level of linguistic representation, and ontological types, which refer to internal qualities of real-world entities. We believe that this topic, because of its importance, should be taken into consideration for a more complete treatment in future work.
| 7
|
Lexical and Semantic Resources and Analysis
|
75_2014
| 2,014
|
Simonetta Vietri
|
The Italian Module for NooJ
|
ENG
| 1
| 1
| 1
|
Università di Salerno
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Salerno
|
This paper presents the Italian module for NooJ. First, we will show the basic linguistic resources: dictionaries, inflectional and derivational grammars, syntactic grammars. Secondly, we will show some results of the application of such linguistic resources: the annotation of date/time patterns, the processing of idioms, the extraction and the annotation of transfer predicates.
|
NooJ is a development environment used to construct large-coverage formalized descriptions of natural languages, and apply them to corpora, in real time. NooJ, whose author is Max Silberztein (Silberztein 2003-), is a knowledge-based system that makes use of huge linguistic resources. Dictionaries, combined with morphosyntactic grammars, are the basic linguistic resources without which it would be impossible to perform a text analysis. The system includes various modules for more than twenty languages, among them Italian (.nooj4nlp.net). Most of the Italian linguistic resources are completely new. The goal of the NooJ project is twofold: to provide tools allowing linguists to implement exhaustive descriptions of languages, and to design a system which processes texts in natural language (see Silberztein 2014). NooJ operates at increasingly higher linguistic levels: tokenization, morphological analysis, disambiguation, named entity recognition, syntactic parsing1. Unlike other systems, for example TreeTagger, developed by Helmut Schmid (1995)2, NooJ is not a tagger, but the user can freely build disambiguation grammars and apply them to texts. Section 2 describes the Italian dictionary and the inflectional/derivational grammars associated with it. Section 3 shows the extraction of date/time patterns, Section 4 the parsing of idioms. Section 5 describes the XML annotation and extraction of transfer predicates.
|
The application of the Italian module to a corpus of 100MB (La Stampa 1998) produced the following results: 33,866,028 tokens and 26,785,331 word forms. The unknown tokens are loan words, typos, acronyms, and altered (evaluative) forms8. The Italian module consists of exhaustive dictionaries/grammars formally coded and manually built on the distributional and morphosyntactic principles defined within the Lexicon-Grammar framework. Such lingware (a) constitutes an invaluable linguistic resource because of the linguistic precision and complexity of its dictionaries/grammars, and (b) can be exploited by the symbolic as well as the hybrid approach to Natural Language Processing. The linguistic approach to NLP still constitutes a valid alternative to the statistical method, which requires the (not always reliable) annotation of large corpora. If the annotated data contain errors, the systems based on them will produce inaccurate results. Moreover, corpora are never exhaustive descriptions of any language. On the other hand, formalized dictionaries/grammars can be enriched, corrected and maintained very easily. Silberztein (2014) contains a detailed discussion of the limits, errors and naïveté of the statistical approach to NLP. The Italian module for NooJ constituted the basis of several research projects such as Elia et al. (2013), Monti et al. (2013), di Buono et al. (2014) and Maisto et al. (2014). Therefore, it has been tested, verified and validated. The results constitute the basis for the updating of the module itself. Ultimately, the lexical resources of the Italian module can be easily exported into any format usable by other systems.
| 7
|
Lexical and Semantic Resources and Analysis
|
76_2015
| 2,015
|
Giovanni Colavizza, Fréderic Kaplan
|
On Mining Citations to Primary and Secondary Sources in Historiography
|
ENG
| 2
| 0
| 0
|
EPFL
| 1
| 1
| 1
| 2
|
Giovanni Colavizza, Fréderic Kaplan
| 2
|
Giovanni Colavizza, Fréderic Kaplan
|
Switzerland
|
Lausanne
|
We present preliminary results from the Linked Books project, which aims at analysing citations from the historiography on Venice. A preliminary goal is to extract and parse citations from any location in the text, especially footnotes, both to primary and secondary sources. We detail a pipeline for these tasks based on a set of classifiers, and test it on the Archivio Veneto, a journal in the domain.
|
The Linked Books project is part of the Venice Time Machine, a joint effort to digitise and study the history of Venice by digital means. The project goal is to analyse the history of Venice through the lens of citations, by network analytic methods. Such research is interesting because it could unlock the potential of the rich semantics of the use of citations in the humanities. A preliminary step is the extraction and normalization of citations, which is a challenge in itself. In this paper we present the first results on this last topic, over a corpus of journals and monographs on the history of Venice, digitised in partnership with the Ca' Foscari Humanities Library and the Marciana Library. Our contribution is three-fold. First, we address the problem of extracting citations in historiography, something rarely attempted before. Secondly, we extract citations from footnotes, with plain text as input. Lastly, we deal at the same time with two different kinds of citations: to primary and to secondary sources. A primary source is documentary evidence used to support a claim; a secondary source is a scholarly publication AUTHOR. In order to solve this problem, we propose a pipeline of classifiers dealing with citation detection, extraction and parsing. The paper is organised as follows: a state of the art in Section 2 is followed by a methodological section explaining the pipeline and the applied computational tools. A section on experiments follows; conclusions and future steps close the paper.
|
We presented a pipeline for recognizing and parsing citations to primary and secondary sources from the historiography on Venice, with a case study on the Archivio Veneto journal. A first filtering step allows us to detect text blocks likely to contain citations, usually footnotes, by means of an SVM classifier trained on a simple set of morphological features. We then detect citation boundaries and macro-categories (to primary and secondary sources) using richer features and CRFs. The last step in our pipeline is the fine-grained parsing of each extracted citation, in order to prepare them for further processing and analysis. In the future we plan to design more advanced feature sets, first of all considering text format features. Secondly, we will implement the next package of our chain: an error-tolerant normalizer which will unify all citations to the same primary or secondary source within a publication, as a means to minimise the impact of classification errors during previous steps.
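A sketch in the spirit of the pipeline summarised above: an SVM filters text blocks likely to contain citations, then a CRF tags tokens with BIO labels for primary/secondary citations. This is not the authors' code; the toy data, labels and features are invented, and scikit-learn plus sklearn_crfsuite are assumed stand-ins for the tools actually used.

```python
# Sketch: block-level citation filter followed by token-level citation tagging.
import sklearn_crfsuite
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stage 1: block-level filter (1 = footnote/block containing citations).
blocks = ["ASV, Senato, Terra, reg. 12, c. 34.", "La piazza era gremita di gente."]
block_labels = [1, 0]
block_clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                          LinearSVC())
block_clf.fit(blocks, block_labels)

# Stage 2: token-level tagging of citation spans and macro-categories.
def token_features(tokens, i):
    tok = tokens[i]
    return {"lower": tok.lower(), "is_digit": tok.isdigit(),
            "is_upper": tok.isupper(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>"}

token_seqs = [["ASV", ",", "Senato", ",", "reg.", "12", "."]]
tag_seqs = [["B-PRIMARY", "I-PRIMARY", "I-PRIMARY", "I-PRIMARY",
             "I-PRIMARY", "I-PRIMARY", "O"]]
X = [[token_features(s, i) for i in range(len(s))] for s in token_seqs]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tag_seqs)
print(crf.predict(X))
```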
| 7
|
Lexical and Semantic Resources and Analysis
|
77_2015
| 2,015
|
Tobias Horsmann, Torsten Zesch
|
Effectiveness of Domain Adaptation Approaches for Social Media POS Tagging
|
ENG
| 2
| 0
| 0
|
University of Duisburg-Essen
| 1
| 1
| 1
| 2
|
Tobias Horsmann, Torsten Zesch
| 0
|
0
|
Germany
|
Duisburg
|
We compare a comprehensive list of domain adaptation approaches for POS tagging of social media data. We find that the most effective approach is based on clustering of unlabeled data. We also show that combining different approaches does not further improve performance. Thus, POS tagging of social media data remains a challenging problem.
|
Part-of-Speech (PoS) tagging of social media data is still challenging. Instead of the tagging accuracies in the high nineties obtained on newswire data, on social media we observe significantly lower numbers. This performance drop is mainly caused by the high number of out-of-vocabulary words in social media, as authors neglect orthographic rules (Eisenstein, 2013). However, special syntax in social media also plays a role, as e.g. pronouns at the beginning of a sentence are often omitted, as in “went to the gym” where the pronoun ‘I’ is implied (Ritter et al., 2011). To make matters worse, existing corpora with PoS annotated social media data are rather small, which has led to a wide range of domain adaptation approaches being explored in the literature. There are two main paradigms: first, adding more labeled training data by adding foreign or machine-generated data (Daumé III, 2007; Ritter et al., 2011); second, incorporating external knowledge or guiding the machine learning algorithm to extract more knowledge from the existing data (Ritter et al., 2011; Owoputi et al., 2013). The first strategy affects from which data is learned, the second one what is learned. Using more training data: usually there is only little PoS annotated data from the social media domain, so just re-training on domain-specific data does not suffice for good performance. Mixed re-training adds additional annotated text from foreign domains to the training data. In case there is much more foreign data than social media data, Oversampling (Daumé III, 2007) can be used to adjust for the difference in size. Finally, Voting can be used to provide more social media training data by relying on multiple already existing taggers. Using more knowledge: instead of adding more training data, we can also make better use of the existing data in order to lower the out-of-vocabulary rate. PoS dictionaries provide, for instance, information about the most frequent tag of a word. Another approach is clustering, which groups words according to their distributional similarity (Ritter et al., 2011). In this paper, we evaluate the potential of each approach for solving the task.
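A minimal sketch of two of the strategies listed above: oversampling the small in-domain corpus when mixing it with newswire data, and adding a word-cluster feature to the tagger's feature set. All data, the cluster ids, and the feature names are invented for illustration; the paper's actual feature templates may differ.

```python
# Sketch: mixed re-training with oversampling + cluster feature for PoS tagging.
def mixed_training_set(social_sents, news_sents, oversample=10):
    # Repeat the scarce social-media sentences so they are not drowned out.
    return social_sents * oversample + news_sents

def token_features(tokens, i, clusters):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "suffix3": tok[-3:].lower(),
        "is_url": tok.startswith("http"),
        "is_mention": tok.startswith("@"),
        "cluster": clusters.get(tok.lower(), "UNK"),   # e.g. a Brown-cluster id
    }

social = [["went", "to", "the", "gym"]]
news = [["The", "government", "approved", "the", "budget", "."]]
cluster_lookup = {"gym": "c101", "government": "c007"}   # invented cluster ids

train = mixed_training_set(social, news)
X = [[token_features(s, i, cluster_lookup) for i in range(len(s))] for s in train]
print(len(train), X[0][0])
```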
|
In this paper, we analyzed domain adaptation approaches for improving PoS tagging on social media text. We confirm that adding more manually annotated in-domain data is highly effective, but annotation costs might often prevent the application of this strategy. Adding more out-domain training data or machine-tagged data is less effective than adding more external knowledge in our experiments. We find that clustering is the most effective individual approach. However, clustering based on very large corpora did not further increase accuracy. As the combination of strategies yielded only minor improvements, clustering seems to dominate the other strategies.
| 6
|
Sentiment, Emotion, Irony, Hate
|
78_2015
| 2,015
|
Sabrina Stehwien, Sebastian Padó
|
Generalization in Native Language Identification: Learners versus Scientists
|
ENG
| 2
| 1
| 1
|
Universität Stuttgart
| 1
| 1
| 1
| 2
|
Sabrina Stehwien, Sebastian Padó
| 0
|
0
|
Germany
|
Stuttgart
|
Native Language Identification (NLI) is the task of recognizing an author's native language from text in another language. In this paper, we consider three English learner corpora and one new, presumably more difficult, scientific corpus. We find that the scientific corpus is only about as hard to model as a less-controlled learner corpus, but cannot profit as much from corpus combination via domain adaptation. We show that this is related to an inherent topic bias in the scientific corpus: researchers from different countries tend to work on different topics.
|
Native Language Identification (NLI) is the task of recognizing an author's native language (L1) from text written in a second language (L2). NLI is important for applications such as the detection of phishing attacks AUTHOR or data collection for the study of L2 acquisition AUTHOR. State-of-the-art methods couch NLI as a classification task, where the classes are the L1 of the author and the features are supposed to model the effects of the author's L1 on L2 (language transfer). Such features may be of varying linguistic sophistication, from function words and structural features AUTHOR on one side to N-grams over characters, words and POS tags AUTHOR on the other side. Like in many NLP tasks, there are few large datasets for NLI. Furthermore, it is often unclear how well the models really capture the desired language transfer properties rather than topics. The widely-used International Corpus of Learner English (ICLE, AUTHOR) has been claimed to suffer from a topic bias AUTHOR: Authors with the same L1 prefer certain topics, potentially due to the corpus collection strategy (from a small set of language courses). As a result, AUTHOR question the generalization of NLI models to other corpora and propose the use of domain adaptation. In contrast, AUTHOR report their ICLE-trained models to perform well on other learner corpora. This paper extends the focus to a novel corpus type, non-native scientific texts from the ACL Anthology. These are substantially different from learner corpora: (a) most authors have a good working knowledge of English; and (b) due to the conventions of the domain, terminology and structure are highly standardized AUTHOR. A priori, we would believe that NLI on the ACL corpus is substantially more difficult. Our results show, however, that the differences between the ACL corpus and the various learner corpora are more subtle: The ACL corpus is about as difficult to model as some learner corpora. However, generalization to the ACL corpus is more difficult, due to its idiosyncratic topic biases.
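A sketch of the kind of NLI classifier described above: predict the author's L1 from character n-gram features of the English text. This uses scikit-learn as an assumed stand-in, not the paper's actual setup; the tiny training set below is invented and only illustrates the interface.

```python
# Sketch: character n-gram features + linear SVM for Native Language Identification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "We propose a novel approach for the parsing of the Italian language.",
    "In this paper we present an approach to dependency parsing for German.",
    "Our method, it improves the results on the treebank of the French.",
    "The experiments show that our model is better than the baseline system.",
]
train_l1 = ["IT", "DE", "FR", "DE"]            # invented L1 labels

nli = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), sublinear_tf=True),
    LinearSVC(C=1.0),
)
nli.fit(train_texts, train_l1)
print(nli.predict(["We describe a corpus for the study of the Italian language."]))
```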
|
This study investigated the generalizability of NLI models across learner corpora and a novel corpus of scientific ACL documents. We found that generalizability is directly tied to corpus properties: well-controlled learner corpora (TOEFL-11, ICLE) generalize well to one another AUTHOR. Together with the minor effect on performance of removing topic-related features, we conclude that topic bias within a similar text type does not greatly affect generalization. At the same time, ``classical'' learner corpora do not generalize well to less-controlled learner corpora (Lang-8) or scientific corpora (ACL). Lang-8 and ACL show comparable performance, which seems surprising given the small size of the ACL corpus and its quite different nature. Our analysis shows that the ACL corpus exhibits an idiosyncratic topic bias: scientists from different countries work on different topics, which is reflected in the models. As a result, the improvements that Lang-8 can derive from domain adaptation techniques carry over to the ACL corpus only to a limited extent. Nevertheless, the use of mSDA can halve the amount of ACL data necessary for the same performance, which is a promising result regarding the generalization to other low-resource domains.
| 8
|
Learner Corpora and Language Acquisition
|
79_2015
| 2,015
|
Fabio Celli, Luca Polonio
|
Facebook and the Real World: Correlations between Online and Offline Conversations
|
ENG
| 2
| 0
| 0
|
Università di Trento
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
Are there correlations between language usage in conversations on Facebook and face to face meetings? To answer this question, we collected transcriptions from face to face multi-party conversations between 11 participants, and retrieved their Facebook threads. We automatically annotated the psycholinguistic dimensions in the two domains by means of the LIWC dictionary, and we performed correlation analysis. Results show that some Facebook dimensions, such as “likes” and shares, have a counterpart in face to face communication, in particular the number of questions and the length of statements. The corpus we collected has been anonymized and is available for research purposes.
|
In recent years there have been great advances in the analysis of communication, in face to face meetings as well as in Online Social Networks (OSN) (Boyd and Ellison, 2007). For example, resources for computational psycholinguistics like the Linguistic Inquiry and Word Count (LIWC) (Tausczik and Pennebaker, 2010) have been applied to OSN like Facebook and Twitter for personality recognition tasks (Golbeck et al., 2011) (Schwartz et al., 2013) (Celli and Polonio, 2013) (Quercia et al., 2011). Interesting psychological research analyzed the motivations behind OSN usage (Gosling et al., 2011) (Seidman, 2013) and whether user profiles in OSN reflect actual personality or a self-idealization (Back et al., 2010). Also Conversation Analysis (CA) of face to face meetings, which has a long history dating back to the ’70s (Sacks et al., 1974), has taken advantage of computational techniques, addressing the detection of consensus in business meetings (Pianesi et al., 2007), multimodal personality recognition (Pianesi et al., 2008) and the detection of conflicts from speech (Kim et al., 2012). In this paper we compare the linguistic behaviour of OSN users online and in face to face meetings. To do so, we collected Facebook data from 11 volunteer users, who participated in an experimental setting where we recorded face to face multiparty conversations of their meetings. Our goal is to discover relationships between a rich set of psycholinguistic dimensions (Tausczik and Pennebaker, 2010) extracted from Facebook metadata and from meeting transcriptions. Our contributions to the research in the fields of Conversation Analysis and Social Network Analysis are: the release of a corpus of speech transcriptions aligned to Facebook data in Italian, and the analysis of correlations between psycholinguistic dimensions in the two settings. The paper is structured as follows: in Section 2 we describe the corpora and the data collection, in Section 3 we explain the method adopted and report the results, and in Section 4 we draw some conclusions.
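A minimal sketch of the correlation analysis described above: per-participant values of LIWC categories and platform metadata in the two settings are correlated pairwise. The data frames, column names and random numbers below are placeholders for the real annotated data, and Spearman correlation is assumed as the measure.

```python
# Sketch: pairwise correlations between Facebook and face-to-face dimensions.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
users = [f"u{i}" for i in range(11)]                 # the 11 participants
fb = pd.DataFrame({"likes": rng.poisson(20, 11),
                   "shares": rng.poisson(5, 11),
                   "anger": rng.random(11)}, index=users)
f2f = pd.DataFrame({"n_questions": rng.poisson(10, 11),
                    "turn_length": rng.random(11) * 30,
                    "anxiety": rng.random(11)}, index=users)

results = []
for a in fb.columns:
    for b in f2f.columns:
        rho, p = spearmanr(fb[a], f2f[b])
        results.append((a, b, rho, p))

for a, b, rho, p in sorted(results, key=lambda r: r[3])[:5]:
    print(f"{a} ~ {b}: rho={rho:.2f}, p={p:.3f}")
```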
|
In this paper, we attempted to analyse the correlations between psycholinguistic dimensions observed in Facebook and in face to face meetings. We found that the types of words significantly correlated in both settings are related to strong emotions (anger and anxiety). We suggest that these are linguistic dimensions that are difficult to control and tend to be constant across settings. Crucially, we also found that likes received on Facebook are correlated with the tendency to ask questions in meetings. The literature on impression formation/management reports that people with high self-esteem in meetings will elicit self-esteem enhancing reactions from others (Hass, 1981). This could explain the link between the tendency to ask questions in meetings with unknown people and the tendency to post content that elicits likes on Facebook. Moreover, the tendency to ask questions in spoken conversations is correlated with observed emotional stability (Mairesse et al., 2007), and emotionally stable users on Twitter tend to have more replies in conversations than neurotic users (Celli and Rossi, 2012). We suggest that the correlation we found can be partially explained by these two previous findings. Another very interesting finding is that the tendency to be reshared on Facebook correlates with the tendency to speak a lot in face to face meetings. Again, the literature about impression formation/management can explain this, because people with high self-esteem tend to engage people and to speak a lot, while people adopting defensive strategies tend to be less assertive and less argumentative. In linguistics it is an open debate whether virality depends on the influence of the source (Zaman et al., 2010) or on the content of the message being shared (Guerini et al., 2011) (Suh et al., 2010). In particular, content that evokes high-arousal positive (amusement) or negative (anger or anxiety) emotions is more viral, while content that evokes low-arousal emotions (sadness) is less viral (Berger and Milkman, 2012). Given that the tendency to express both positive and negative feelings and emotions in spoken conversations is a feature of extraversion (Mairesse et al., 2007), and that the literature in psychology links the tendency to speak a lot to extraversion (Gill and Oberlander, 2002), observed neuroticism (Mairesse et al., 2007) and dominance (Bee et al., 2010), we suggest that the correlation between long turns in meetings and highly shared content on Facebook may be due to extraversion, dominance and high self-esteem. We are going to release the dataset we collected on demand.
| 9
|
Textual Genres & Literature Linguistics
|
80_2015
| 2,015
|
Flavio Massimiliano Cecchini, Elisabetta Fersini
|
Word Sense Discrimination: A gangplank algorithm
|
ENG
| 2
| 1
| 0
|
Università di Milano Bicocca
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Milan
|
In this paper we present an unsupervised, graph-based approach for Word Sense Discrimination. Given a set of text sentences, a word co-occurrence graph is derived and a distance based on Jaccard index is defined on it; subsequently, the new distance is used to cluster the neighbour nodes of ambiguous terms using the concept of ``gangplanks'' as edges that separate denser regions (``islands'') in the graph. The proposed approach has been evaluated on a real data set, showing promising performance in Word Sense Discrimination.
|
Word Sense Disambiguation is a challenging research task in Computational Linguistics and Natural Language Processing. The main reasons behind the difficulties of this task are the ambiguity and arbitrariness of human language: depending on its context, the same term can assume different interpretations, or senses, in an unpredictable manner. In the last decade, three main research directions have been investigated AUTHOR: 1) supervised AUTHOR, 2) knowledge-based AUTHOR and 3) unsupervised Word Sense Disambiguation AUTHOR, where the last approach is better defined as ``induction'' or ``discrimination''. In this paper we focus on the automatic discovery of senses from raw text, by pursuing an unsupervised Word Sense Discrimination paradigm. We are interested in the development of a method that can be generally independent from the register or the linguistic well-formedness of a text document and, given an adequate pre-processing step, from the language. Among the many unsupervised research directions, i.e. context clustering AUTHOR, word clustering AUTHOR, probabilistic clustering AUTHOR and co-occurrence graph clustering AUTHOR, we committed to the last one, based on the assumption that word co-occurrence graphs can reveal local structural properties tied to the different senses a word might assume in different contexts. Given a global word co-occurrence graph, the main goal is to exploit the subgraph induced by the neighbourhood of the word to be disambiguated (a ``word cloud''). There, we define separator edges (``gangplanks'') and use them as the means to cluster the word cloud: the fundamental assumption is that, in the end, every cluster will correspond to a different sense of the word. The paper is organized as follows. In the next section we explain how we build our co-occurrence graph and word clouds by means of a weighted Jaccard distance. We then describe the gangplank algorithm, present the algorithm's results on our data set and their evaluation, give a brief overview of related work, and close with some short conclusions.
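A sketch of the graph-construction step described above: a word co-occurrence graph built from sentences, with a Jaccard-based distance between nodes. This is a simplified illustration; the exact weighting and distance used in the paper may differ, and the toy sentences are invented.

```python
# Sketch: co-occurrence graph + Jaccard-based distance between word nodes.
from itertools import combinations
import networkx as nx

def cooccurrence_graph(sentences):
    g = nx.Graph()
    for tokens in sentences:                     # sentences = lists of tokens
        for u, v in combinations(set(tokens), 2):
            w = g.get_edge_data(u, v, {"weight": 0})["weight"]
            g.add_edge(u, v, weight=w + 1)
    return g

def jaccard_distance(g, u, v):
    nu, nv = set(g[u]), set(g[v])                # neighbourhoods of the two nodes
    inter, union = len(nu & nv), len(nu | nv)
    return 1.0 - inter / union if union else 1.0

g = cooccurrence_graph([["bank", "river", "water"], ["bank", "money", "loan"]])
print(jaccard_distance(g, "river", "money"))     # large: different senses of "bank"
```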
|
The main challenge we encountered for our word sense discrimination algorithm was the difficulty of handling a small-world graph. Apart from that, we note that word clustering only represents the last step of a process that starts with the pre-processing and tokenization of a text, which are both mostly supervised in nature. Our future goals will be to investigate the relations between text pre-processing and clustering results, and how to render the whole process completely unsupervised.
| 7
|
Lexical and Semantic Resources and Analysis
|
81_2015
| 2,015
|
Manuela Speranza, Anne-Lyse Minard
|
Cross-language projection of multilayer semantic annotation in the NewsReader Wikinews Italian Corpus (WItaC)
|
ENG
| 2
| 2
| 1
|
Fondazione Bruno Kessler
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
In this paper we present the annotation of events, entities, relations and coreference chains performed on Italian translations of English annotated texts. As manual annotation is a very expensive and time-consuming task, we devised a cross-lingual projection procedure based on the manual alignment of annotated elements.
|
The NewsReader Wikinews Italian Corpus (WItaC) is a new Italian annotated corpus consisting of English articles taken from Wikinews (a collection of multilingual online news articles written collaboratively in a wiki-like manner) and translated into Italian by professional translators. The English corpus was created and annotated manually within the NewsReader project, whose goal is to build a multilingual system able to reconstruct storylines across news articles in order to provide policy and decision makers with an overview of what happened, to whom, when, and where. Semantic annotations in the NewsReader English Wikinews corpus span multiple levels, including both intra-document annotation (entities, events, temporal information, semantic roles, and event and entity coreference) and cross-document annotation (event and entity coreference). As manual annotation is a very expensive and time-consuming task, we devised a procedure to automatically project the annotations already available in the English texts onto the Italian translations, based on the manual alignment of the annotated elements in the two languages. The English corpus, taken directly from Wikinews, together with WItaC, its translation, ensures access to non-copyrighted articles for the evaluation of the NewsReader system and the possibility of comparing results in the two languages at a fine-grained level. WItaC aims at being a reference for the evaluation of storyline reconstruction, a task requiring several subtasks, e.g. semantic role labeling (SRL) and event coreference. In addition, it is part of a cross-lingually annotated corpus, thus enabling experiments across different languages. The remainder of this article is organized as follows. We first review related work. We then present the annotations available in the English corpus used as the source for the projection, detail some adaptations of the guidelines specific to Italian, and describe the annotation process and the resulting WItaC corpus. Finally, we conclude by presenting some future work.
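A minimal sketch of the projection idea described above: given an alignment between English and Italian tokens, each English annotation is copied onto the aligned Italian span. The data structures below are hypothetical simplifications, not the project's actual annotation format.

```python
# Sketch: projecting annotation spans from English onto aligned Italian tokens.
def project_annotations(en_annotations, alignment):
    """en_annotations: list of dicts like {"type": "EVENT", "tokens": [3, 4]}.
    alignment: dict mapping an English token index to a list of Italian indices."""
    it_annotations = []
    for ann in en_annotations:
        it_tokens = sorted({j for i in ann["tokens"] for j in alignment.get(i, [])})
        if it_tokens:                      # unaligned markables need manual treatment
            it_annotations.append({"type": ann["type"], "tokens": it_tokens})
    return it_annotations

alignment = {3: [2], 4: [3, 4]}            # e.g. one English token -> two Italian tokens
print(project_annotations([{"type": "EVENT", "tokens": [3, 4]}], alignment))
```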
|
We have presented WItaC, a new corpus consisting of Italian translations of English texts annotated using a cross-lingual projection method. We acknowledge some influence of English in the translated texts (for instance, we noticed an above-average occurrence of noun modifiers, as in “dipendenti Airbus”) and in the annotation (for instance, annotators might have been influenced by English in the identification of light verb constructions in the Italian corpus). On the other hand, this method enabled us not only to considerably reduce the annotation effort, but also to add a new cross-lingual level to the NewsReader corpus; in fact, we now have two annotated corpora, in English and Italian, in which entity and event instances (in total, over 1,600) are shared. In the short term we plan to manually revise the projected relations and add the language-specific attributes. We also plan to use the corpus as a dataset for a shared evaluation task; afterwards we will make it freely available from the website of the HLT-NLP group at FBK and from the website of the NewsReader project.
| 7
|
Lexical and Semantic Resources and Analysis
|
82_2015
| 2,015
|
Lorenzo Gregori, Andrea Amelio Ravelli, Alessandro Panunzi
|
Linking dei contenuti multimediali tra ontologie multilingui: i verbi di azione tra IMAGACT e BabelNet
|
ITA
| 3
| 0
| 0
|
Università di Firenze
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Florence
|
The study presented here concerns the linking of two multilingual and multimedia resources, BabelNet and IMAGACT. In particular, the linking experiment targets the videos of the IMAGACT action ontology and the corresponding verbal lexical entries of BabelNet. The task was carried out through an algorithm that operates on the lexical information present in the two resources. The linking results show that it is possible to establish an extensive link between the two ontologies. Such a link is desirable as it would provide a rich multimedia database for complex tasks such as disambiguating the reference of action verbs and the automatic and assisted translation of the sentences containing them.
|
Ontologies are widely used tools to represent linguistic resources on the web and make them exploitable by methods for the automatic processing of natural language. The availability of shared formal languages, such as RDF and OWL, and the development of high-level ontologies, such as lemon (McCrae et al., 2011), are leading to a unified methodology for publishing language resources in the form of open data. The representation of information through ontologies is not by itself sufficient for the construction of the semantic network that underlies the new web paradigms. The interconnection of information and, consequently, the mapping and linking between different ontologies become essential aspects for access to knowledge and its enrichment, as evidenced by the increasing development of research in this field (Otero-Cerdeira et al., 2015). The need to maximize connections between different resources must be reconciled with the fact that each ontology is built with different criteria, which refer to different theoretical frameworks. In this context instance matching becomes particularly relevant, since it makes it possible to connect resources without mapping ontological entities (Castano et al., 2008; Nath et al., 2014). In this article we present a hypothesis for connecting two linguistic ontologies, BabelNet (Navigli and Ponzetto, 2012a) and IMAGACT (Moneglia et al., 2014a), both multimedia, multilingual and exploitable for translation and disambiguation tasks (Moro and Navigli, 2015; Russo et al., 2013; Moneglia, 2014). The connection between the ontologies takes place through the visual component of IMAGACT, i.e. the representation of actions by means of prototype scenes.
|
Although a fine-tuning of the parameters has not yet been done (which would require a wider test set), the good results obtained by this experiment open up the possibility of connecting the two ontologies through the scenes of IMAGACT, in order to enrich both resources. On the one hand, IMAGACT's videos could represent BabelNet's actional concepts; on the other hand, IMAGACT would be enriched with BabelNet's translation information. Moreover, from the observation of Babelfy (Moro et al., 2014), the word sense disambiguation and entity linking engine derived from BabelNet, it became apparent that the linking hypothesis proposed here would have a significant impact on the expressiveness of the visual representation of sentences, by associating images with nouns and videos with verbs. Finally, it is important to note that both BabelNet and IMAGACT are expanding resources: since the algorithms exploit translation equivalents, the results can become more precise as the languages and the terms considered increase.
| 7
|
Lexical and Semantic Resources and Analysis
|
83_2015
| 2,015
|
Marco Angster
|
Bolzano/Bozen Corpus: Coding Information about the Speaker in IMDI Metadata Structure
|
ENG
| 1
| 0
| 0
|
Libera Università di Bolzano
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Bolzano
|
The paper introduces a new collection of spoken data (the Bolzano/Bozen Corpus) available through The Language Archive of the Max Planck Institute in Nijmegen. It shows an example of the issues encountered in accommodating information from an existing corpus into the IMDI metadata structure. Finally, it provides preliminary reflections on CMDI, a component-based metadata format.
|
Once a Language Resource (LR) exists it should be used, and this entails several problems. First of all it must be available to the public – which may be the academic community, but also industry or institutions – and, given that producing a LR is an expensive task, it would be ideal if a LR could be exploited beyond the originally intended public. The re-usability of a LR is possible provided that it is conceived following shared standards for formats, tagging and metadata. In this paper I focus on metadata structures; in particular, I introduce a collection of spoken data (the Bolzano/Bozen Corpus) and I show the problems encountered in fitting the information available about the speakers sampled in the data into the IMDI metadata structure. The paper aims at providing an example of how flexible the considered metadata structures are in accommodating information from existing collections of data and in adapting to the needs of the researcher in sociolinguistics.
|
In this paper, I have shown an example of the difficulty of using a metadata structure to accommodate information on speakers' linguistic background. I have taken into account the case of the Bolzano/Bozen Corpus and two sociolinguistically oriented projects (KOMMA, Kontatto) hosted on The Language Archive. IMDI, the former standard of TLA, is now an outdated tool and is too rigid to adapt to specific purposes. The new standard CMDI gives the research community ample possibilities to define metadata formats tailored to specific needs. However, CMDI does not yet provide satisfactory profiles and components for sociolinguistic studies, especially as far as background information about the speaker is concerned. Furthermore, direct contribution to CMDI components is restricted to CLARIN centres, and in some crucial cases even the categories available in CMDI are unsatisfactory and must be proposed to the relevant (and closed) DCR. The case I have discussed shows, on the one hand, the possibilities of CMDI. On the other hand, the difficulty of contributing to CMDI profiles and components from outside CLARIN may lead to the uncomfortable condition of having huge amounts of data with unsatisfactory metadata, which have little chance of being re-used, thus failing one of the main objectives of a standardisation initiative.
| 7
|
Lexical and Semantic Resources and Analysis
|
84_2015
| 2,015
|
Paolo Dragone, Pierre Lison
|
An Active Learning Approach to the Classification of Non-Sentential Utterances
|
ENG
| 2
| 0
| 0
|
Sapienza Università di Roma, University of Oslo
| 2
| 1
| 0
| 1
|
Pierre Lison
| 0
|
0
|
Norway, Italy
|
Oslo, Rome
|
This paper addresses the problem of the classification of non-sentential utterances (NSUs). NSUs are utterances that do not have a complete sentential form but convey a full clausal meaning given the dialogue context. We extend the approach of Fernández et al. (2007), who provide a taxonomy of NSUs and a small annotated corpus extracted from dialogue transcripts. This paper demonstrates how the combination of new linguistic features and active learning techniques can mitigate the scarcity of labelled data. The results show a significant improvement in the classification accuracy over the state of the art.
|
In dialogue, utterances do not always take the form of complete, well-formed sentences with a subject, a verb and complements. Many utterances – often called non-sentential utterances, or NSUs for short – are fragmentary and lack an overt predicate. Consider the following examples from the British National Corpus: A: How do you actually feel about that? B: Not too happy. [BNC: JK8 168-169] A: They wouldn't do it, no. B: Why? [BNC: H5H 202-203] A: [...] then across from there to there. B: From side to side. [BNC: HDH 377-378] Despite their ubiquity, the semantic content of NSUs is often difficult to extract automatically. Non-sentential utterances are indeed intrinsically dependent on the dialogue context for their interpretation – for instance, the meaning of "why" in the example above is impossible to decipher without knowing what precedes it. This paper describes a new approach to the classification of NSUs. The approach builds upon the work of Fernández et al. (2007), who present a corpus of NSUs along with a taxonomy and a classifier based on simple features. In particular, we show that the inclusion of new linguistic features and the use of active learning provide a modest but significant improvement in classification accuracy compared to their approach. The next section presents the corpus used in this work and its associated taxonomy of NSUs. Section 3 describes our classification approach (extracted features and learning algorithm). Section 4 finally presents the empirical results and their comparison with the baseline.
|
This paper presented the results of an experiment in the classification of non-sentential utterances, extending the work of Fernández et al. (2007). The approach relied on an extended feature set and active learning techniques to address the scarcity of labelled data and the class imbalance. The evaluation results demonstrated a significant improvement in classification accuracy. The presented results also highlight the need for a larger annotated corpus of NSUs. In our view, the development of such a corpus, including new dialogue domains and a broader range of conversational phenomena, could contribute to a better understanding of NSUs and their interpretation. Furthermore, the classification of NSUs according to their type constitutes only the first step in their semantic interpretation. Dragone and Lison (2015) focus on integrating the NSU classification outputs into natural language understanding of conversational data, building upon Ginzburg (2012)'s formal theory of conversation.
| 13
|
Multimodal
|
85_2015
| 2,015
|
Serena Pelosi
|
SentIta and Doxa: Italian Databases and Tools for Sentiment Analysis Purposes
|
ENG
| 1
| 1
| 1
|
Università di Salerno
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Salerno
|
This research presents SentIta, a sentiment lexicon for the Italian language, and Doxa, a prototype that, interacting with the lexical database, applies a set of linguistic rules for Document-level Opinionated teXt Analysis. Details about the population of the dictionary, the semantic analysis of texts written in natural language and the evaluation of the tools are provided in the paper.
|
Through online customer review systems, Internet forums, discussion groups and blogs, consumers are able to share positive or negative information that can influence purchase decisions in different ways and shape buyer expectations, above all with regard to experience goods (Nakayama et al., 2010), such as hotels (Ye et al., 2011), restaurants (Zhang et al., 2010), movies (Duan et al., 2008), books (Chevalier and Mayzlin, 2006) or videogames (Zhu and Zhang, 2006). Consumers, as Internet users, can freely share their thoughts with huge and geographically dispersed groups of people, competing in this way with the traditional power of marketing and advertising channels. Differently from traditional word-of-mouth, which is usually limited to private conversations, user-generated content on the Internet can be directly observed and described by researchers. The present paper provides in Section 2 a concise overview of the most popular techniques for both sentiment analysis and polarity lexicon propagation. Afterwards, it describes in Section 3 the method used to semi-automatically populate SentIta, our Italian sentiment lexicon, and in Section 4 the rules exploited to put the words' polarity in context. Finally, Section 5 describes our opinion analyzer Doxa, which performs document-level sentiment analysis, sentiment role labeling and feature-based opinion summarization.
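The following toy sketch illustrates the general idea of combining a prior-polarity lexicon with contextual rules (negation, intensification) and a document-level aggregation; the lexicon entries, scores and rules below are invented for illustration and are neither SentIta's entries nor Doxa's finite-state grammars.

```python
# Toy illustration of a prior-polarity lexicon combined with contextual rules
# (negation, intensification); NOT the actual SentIta/Doxa finite-state grammars.
LEXICON = {"ottimo": 1.0, "buono": 0.5, "pessimo": -1.0, "noioso": -0.5}  # invented scores
NEGATIONS = {"non"}
INTENSIFIERS = {"molto": 1.5, "davvero": 1.5}

def sentence_polarity(tokens):
    score, modifier, negated = 0.0, 1.0, False
    for tok in tokens:
        t = tok.lower()
        if t in NEGATIONS:
            negated = True
        elif t in INTENSIFIERS:
            modifier *= INTENSIFIERS[t]
        elif t in LEXICON:
            value = LEXICON[t] * modifier
            score += -value if negated else value
            modifier, negated = 1.0, False   # contextual operators apply to the next polar word
    return score

def document_polarity(sentences):
    """Document-level score as the sum of sentence scores."""
    return sum(sentence_polarity(s.split()) for s in sentences)

print(document_polarity(["il film è davvero noioso", "non è un buono attore"]))
```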
|
In the present paper we underlined that the social and economic impact of online customer opinions and the huge volume of raw data available on the web concerning users' points of view offer new opportunities both to marketers and to researchers. Indeed, sentiment analysis applications able to go deep into the semantics of sentences and texts can play a crucial role in tasks like web reputation monitoring, social network analysis, viral tracking campaigns, etc. Therefore, we presented SentIta, a semi-automatically built Italian lexicon for Sentiment Analysis, and Doxa, a Document-level Opinionated teXt Analyzer that exploits finite-state technologies to explore the subjective dimension of user-generated content.
| 6
|
Sentiment, Emotion, Irony, Hate
|
86_2015
| 2,015
|
Giuseppe Castellucci, Danilo Croce, Roberto Basili
|
A Graph-based Model of Contextual Information in Sentiment Analysis over Twitter
|
ENG
| 3
| 0
| 0
|
Università di Roma Tor Vergata
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Rome
|
Analyzing the sentiment expressed by short messages available in Social Media is challenging, as the information available when considering a single message is scarce. A fundamental role is played by the contextual information that is available when interpreting a message. In this paper, a graph-based method is applied: a graph is built containing the contextual information needed to model complex interactions between messages. A Label Propagation algorithm is adopted to spread polarity information from known polarized nodes to the others.
|
Sentiment Analysis (SA) AUTHOR faces the problem of deciding whether a text expresses a sentiment, e.g. positivity or negativity. Social Media are observed to measure the sentiment expressed on the Web about products, companies or politicians. The interest in the analysis of tweets led to the definition of highly participated challenges, e.g. AUTHOR or AUTHOR. Machine Learning (ML) approaches are often adopted to classify the sentiment AUTHOR, where specific representations and hand-coded resources AUTHOR are used to train a classifier. As tweets are very short, the amount of available information is in general not sufficient for ML approaches to take a robust decision. A valid strategy AUTHOR exploits contextual information, e.g. the reply-to chain, to support robust sentiment recognition in online discussions. In this paper, we foster the idea that Twitter messages belong to a network where complex interactions between messages are available. As suggested in AUTHOR, tweets can be represented in graph structures, along with words, hashtags or users. A Label Propagation algorithm AUTHOR can be adopted to propagate (possibly noisy) sentiment labels within the graph. In AUTHOR, it has been shown that such an approach can support SA by determining how messages, words, hashtags and users influence each other. The definition of the graph is fundamental for the resulting inference: e.g., when mixing messages about different topics, sentiment detection can be difficult. We take inspiration from the contexts defined in AUTHOR. In AUTHOR no explicit relation between messages is considered. We, instead, build a graph where messages in the same context are linked to each other and to the words appearing in them. Moreover, we inject the prior polarity of words as available in a polarity lexicon AUTHOR. Experiments are carried out over a subset of the Evalita 2014 Sentipolc AUTHOR dataset, showing improvements in polarity classification with respect to not using networked information. In the remainder of the paper, we first present our graph-based approach, then evaluate the proposed method on an Italian dataset, and finally draw the conclusions.
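A minimal sketch of label propagation over a small contextual graph of messages and words follows; this simplified iterative update is only a stand-in for the MAD algorithm actually used, and the toy graph, weights and seed labels are invented.

```python
# Minimal iterative label propagation over an undirected weighted graph
# (a simplification; the paper uses the MAD algorithm, not this update rule).
import numpy as np

# Toy graph: 2 seed tweets, 1 unlabelled tweet, 2 word nodes shared across contexts.
nodes = ["tweet_pos", "tweet_neg", "tweet_unk", "w_bello", "w_brutto"]
edges = [("tweet_pos", "w_bello", 1.0), ("tweet_neg", "w_brutto", 1.0),
         ("tweet_unk", "w_bello", 1.0), ("tweet_pos", "tweet_unk", 0.5)]  # intra-context link

idx = {n: i for i, n in enumerate(nodes)}
W = np.zeros((len(nodes), len(nodes)))
for u, v, w in edges:
    W[idx[u], idx[v]] = W[idx[v], idx[u]] = w

# Seed label distributions over (positive, negative); all-zero rows are unlabelled.
seeds = np.zeros((len(nodes), 2))
seeds[idx["tweet_pos"]] = [1.0, 0.0]
seeds[idx["tweet_neg"]] = [0.0, 1.0]

labels = seeds.copy()
for _ in range(50):
    spread = W @ labels                       # collect the neighbours' label mass
    norm = spread.sum(axis=1, keepdims=True)
    spread = np.divide(spread, norm, out=np.zeros_like(spread), where=norm > 0)
    labels = np.where(seeds.sum(axis=1, keepdims=True) > 0, seeds, spread)  # clamp the seeds

print(dict(zip(nodes, np.round(labels, 2))))  # tweet_unk drifts towards the positive class
```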
|
In this paper, the Contextual Graph is defined as a structure where messages can influence each other through both intra-context and extra-context links: the former are links between messages, while the latter serve to link messages in different contexts through shared words. The application of a Label Propagation algorithm confirms the positive impact of contextual information on the Sentiment Analysis task over Social Media. We successfully injected the prior polarity information of words into the graph, obtaining further improvements. This is our first investigation of graph approaches for SA: we only adopted the MAD algorithm, while other algorithms have been defined since (Zhu and Ghahramani, 2002), and they will be investigated in future work. Moreover, other contextual information could be adopted. Finally, other datasets should be considered, to prove that the effectiveness of the proposed method does not strictly depend on the language of the messages.
| 6
|
Sentiment, Emotion, Irony, Hate
|
87_2015
| 2,015
|
Fabrizio Esposito, Pierpaolo Basile, Francesco Cutugno, Marco Venuti
|
The CompWHoB Corpus: Computational Construction, Annotation and Linguistic Analysis of the White House Press Briefings Corpus
|
ENG
| 4
| 0
| 0
|
Università di Napoli Federico II, Università di Bari Aldo Moro, Università di Catania
| 3
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Naples, Bari, Catania
|
The CompWHoB (Computational White House press Briefings) Corpus, currently being developed at the University of Naples Federico II, is a corpus of spoken American English focusing on political and media communication. It represents a large collection of the White House Press Briefings, namely, the daily meetings held by the White House Press Secretary and the news media. At the time of writing, the corpus amounts to more than 20 million words, covers a period of time of twenty-one years spanning from 1993 to 2014 and it is planned to be extended to the end of the second term of President Barack Obama. The aim of the present article is to describe the composition of the corpus and the techniques used to extract, process and annotate it. Moreover, attention is paid to the use of the Temporal Random Indexing (TRI) on the corpus as a tool for linguistic analysis.
|
As political speech has been gaining more and more attention over recent years in the analysis of communication strategies, political corpora have become of paramount importance for the fulfilment of this objective. The CompWHoB Corpus, a spoken American English corpus currently being developed at the University of Naples Federico II, aims to meet the need for political language data, as it focuses on the political and media communication genre. This resource is a large collection of the transcripts of the White House Press Briefings, namely, the daily meetings held by the White House Press Secretary and the news media. As one of the main official channels of communication for the White House, briefings indeed play a crucial role in the administration's communication strategies AUTHOR. The corpus currently amounts to more than 20 million words and spans from 1993 to 2014, thus covering a period of twenty-one years and five presidencies. Work is underway to extend the corpus so as to reach the end of the second term of President Barack Obama. Unlike other political corpora such as CORPS AUTHOR and the Political Speech Corpus of Bulgarian AUTHOR, the CompWHoB does not include monological situations, due to the inherently dialogical character of the briefings. Like other web corpora AUTHOR, the CompWHoB can be considered a web corpus AUTHOR, since its texts are directly extracted from The American Presidency Project website. Moreover, it should be pointed out that WHoB is a pre-existing specialized corpus AUTHOR annotated using XML mark-up and mainly employed in the field of corpus linguistics. Thus, the aim of the present article is to describe how the corpus can be used as a future resource in different research fields such as computational linguistics, (political) linguistics, political science, etc. The paper is structured as follows: Section 2 gives an overview of the corpus. Section 3 describes the details of the corpus construction and annotation. The use of TRI on the corpus is then discussed in Section 4. Lastly, Section 5 concludes the paper.
|
At the time of writing, the CompWHoB Corpus is probably one of the largest political corpora mainly based on spontaneous spoken language. This feature represents one of its strongest points, as the linguistic analysis performed by employing TRI has shown. For the near future, we have two main goals: the first is to make the process of structural annotation as automatic as possible by retrieving information from available political databases; the second is to provide the corpus with syntactic parsing and improve the overall performance of the linguistic annotation process. In terms of accessibility, we intend to make the CompWHoB Corpus available via the CQPweb interface (Hardie, 2012) by the end of next year. For now, the fully annotated corpus is accessible and available on request.
| 7
|
Lexical and Semantic Resources and Analysis
|
88_2015
| 2,015
|
Raffaele Guarasci, Alessandro Maisto
|
New wine in old wineskins: a morphology-based approach to translate medical terminology
|
ENG
| 2
| 0
| 0
|
Università di Salerno
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Salerno
|
In this work we introduce the first steps towards the development of a machine translation system for medical terminology. We explore the possibility of basing a machine translation task in the medical domain on morphology. Starting from neoclassical formative elements, or confixes, we started building MedIta, a cross-language ontology of medical morphemes, aiming to offer a standardized, consistent medical resource that includes distributional and semantic information about medical morphemes. Using this information, we have built an ontology-driven Italian-English machine translation prototype, based on a set of Finite State Transducers, and we have carried out an experiment on the Orphanet medical corpus to evaluate the feasibility of this approach.
|
Automating Machine Translation (MT) of a technical language is a challenging task that requires an in-depth analysis both from a linguistic point of view and as regards the implementation of a complex system. This becomes even more complex for medical language. Indeed, the translation of medical terminology must always be validated by a domain expert following official classification standards. For this reason there are currently no translation support tools specifically created for the medical domain. In this work we propose an MT system based on a set of Finite State Transducers that uses cross-language morpheme information provided by a lexical resource. The underlying idea is that in a technical language a morpho-semantic approach AUTHOR may be more effective than a probabilistic one in term-by-term translation tasks. Even though our approach could seem a bit ``old fashioned'', we must consider the very nature of medical language, which is largely built from morphemes derived from neoclassical formative elements AUTHOR. Neoclassical formative elements are morphological elements originating from Latin and Greek words, which combine with each other following the rules of compositional morphology. Due to the heterogeneous nature of these elements, they have received different definitions; we prefer the term confixes, i.e. morphemes with full semantic value, which has been predominantly used in the literature AUTHOR. In this work we focused only on word formation related to the medical domain.
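To illustrate the intuition behind confix-based translation, the following toy sketch greedily segments an Italian medical term into known morphemes using a small bilingual morpheme dictionary and recomposes the English equivalent; the mini-dictionary is invented for illustration, and the real system relies on the MedIta resource and Finite State Transducers rather than on this greedy matcher.

```python
# Toy sketch of confix-based term translation (Italian -> English).
# The real system relies on the MedIta morpheme ontology and Finite State
# Transducers; the mini-dictionary below is invented for illustration.
MORPHEMES_IT_EN = {
    "gastro": "gastro", "enter": "enter", "epat": "hepat",
    "ite": "itis", "logia": "logy", "o": "o",
}

def segment(term, lexicon):
    """Greedy longest-match segmentation into known morphemes (None if it fails)."""
    segments, i = [], 0
    while i < len(term):
        for j in range(len(term), i, -1):          # longest match first
            if term[i:j] in lexicon:
                segments.append(term[i:j])
                i = j
                break
        else:
            return None                            # unanalysable residue
    return segments

def translate(term):
    segments = segment(term.lower(), MORPHEMES_IT_EN)
    return "".join(MORPHEMES_IT_EN[m] for m in segments) if segments else None

print(translate("gastroenterite"))   # -> gastroenteritis
print(translate("epatologia"))       # -> hepatology
```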
|
In this work we presented a morphology-based machine translation prototype specifically suited for medical terminology. The prototype uses ontologies of morphemes and Finite State Transducers. Even though the approach may seem a little out-of-date, the preliminary results showed that it can work as well as a probabilistic system in such a specific domain. It is worth mentioning that at this early stage we tested the prototype only on samples, since the evaluation is an extremely time-consuming task: every translated term must be manually compared with one or more medical standards. Medical standards are often not aligned, therefore an Orpha-number (disease id) does not necessarily match a disease listed in ICD-10. Moreover, these resources are not easily usable in an automated way, therefore the evaluation should entirely be done manually.
| 20
|
In-domain IR and IE
|
89_2015
| 2,015
|
Pierpaolo Basile, Annalina Caputo, Giovanni Semeraro
|
Entity Linking for Italian Tweets
|
ENG
| 3
| 1
| 0
|
Università di Bari Aldo Moro
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Bari
|
Linking entity mentions in Italian tweets to concepts in a knowledge base is a challenging task, due to the short and noisy nature of these messages and the lack of specific resources for Italian. This paper proposes an adaptation of a general-purpose Named Entity Linking algorithm, which exploits a similarity measure computed over a Distributional Semantic Model, to the context of Italian tweets. In order to evaluate the proposed algorithm, we introduce a new dataset of tweets for entity linking that we developed specifically for the Italian language.
|
In this paper we address the problem of entity linking for Italian tweets. Named Entity Linking (NEL) is the task of annotating entity mentions in a portion of text with links to a knowledge base. This task usually requires as a first step the recognition of the portions of text that refer to named entities (entity recognition). The linking phase follows, which usually subsumes entity disambiguation, i.e. selecting the proper concept from a restricted set of candidates (e.g. New York city or New York state). NEL, together with Word Sense Disambiguation, i.e. the task of associating each word occurrence with its proper meaning given a sense inventory, is critical to enable automatic systems to make sense of unstructured text. Initially developed for reasonably long and clean text, such as news articles, NEL techniques usually show unsatisfying performance on the noisy, short and poorly written text of microblogs such as Twitter. These difficulties notwithstanding, with an average of 500 million posts generated every day, tweets represent a rich source of information. Twitter-based tasks like user interest discovery, tweet recommendation, and social/economic analysis could benefit from the kind of semantic features represented by named entities linked to a knowledge base. Such tasks become even more problematic when tweet analysis involves languages other than English. Specifically, in the context of the Italian language, the lack of language-specific resources and annotated tweet datasets complicates the assessment of NEL algorithms for tweets. Our main contributions to this problem are: (i) an adaptation of a Twitter-based NEL algorithm based on a Distributional Semantic Model (DSM-TEL), which needs no specific Italian resources since it is completely unsupervised (Section 3); (ii) an Italian dataset of manually annotated tweets for NEL, to the best of our knowledge the first Italian dataset of this type (Section 2 reports details concerning the annotation phase and statistics about the dataset); (iii) an evaluation of well-known NEL algorithms available for the Italian language on this dataset, comparing their performance with our DSM-TEL algorithm in terms of both entity recognition and linking (Section 4 shows and analyses the results of that evaluation).
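A minimal sketch of the disambiguation step underlying a distributional approach is shown below: candidate knowledge-base entities are ranked by the cosine similarity between their vectors and the tweet context vector. The toy vectors and candidates are invented, and this is not the full DSM-TEL pipeline (no mention detection, no NIL handling).

```python
# Minimal sketch of distributional candidate ranking for entity linking:
# score each candidate entity by the cosine similarity between its vector and
# the tweet context vector (average of word vectors). Vectors are invented;
# this is not the full DSM-TEL pipeline.
import numpy as np

EMBEDDINGS = {                      # toy distributional space
    "partita": np.array([0.9, 0.1, 0.0]),
    "gol":     np.array([0.8, 0.2, 0.1]),
    "Roma_(città)":   np.array([0.1, 0.9, 0.3]),
    "A.S._Roma":      np.array([0.9, 0.2, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def link(context_words, candidates):
    ctx = np.mean([EMBEDDINGS[w] for w in context_words if w in EMBEDDINGS], axis=0)
    return max(candidates, key=lambda c: cosine(ctx, EMBEDDINGS[c]))

# "Roma" in a tweet about a football match should link to the club, not the city.
print(link(["partita", "gol"], ["Roma_(città)", "A.S._Roma"]))   # -> A.S._Roma
```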
|
We tackled the problem of entity linking for Italian tweets. Our contribution is threefold: 1) we built the first Italian tweet dataset for entity linking, 2) we adapted a distributional NEL algorithm to the Italian language, and 3) we compared state-of-the-art systems on the built dataset. As for English, the entity linking task for Italian tweets turns out to be quite difficult, as shown by the very low performance of all the systems employed. As future work we plan to extend the dataset in order to provide more examples for training and testing.
| 7
|
Lexical and Semantic Resources and Analysis
|
90_2015
| 2,015
|
Luigi Di Caro, Guido Boella, Alice Ruggeri, Loredana Cupi, Adebayo Kolawole, Livio Robaldo
|
From a Lexical to a Semantic Distributional Hypothesis
|
ENG
| 6
| 2
| 0
|
Università di Torino, University of Luxembourg, Università di Bologna
| 3
| 1
| 0
| 1
|
Adebayo Kolawole
| 0
|
0
|
Luxembourg, Italy
|
Esch-sur-Alzette, Turin, Bologna
|
Distributional Semantics is based on the idea of extracting semantic information from lexical information in (multilingual) corpora using statistical algorithms. This paper presents the challenging aim of the SemBurst research project which applies distributional methods not only to words, but to sets of semantic information taken from existing semantic resources and associated with words in syntactic contexts. The idea is to inject semantics into vector space models to find correlations between statements (rather than between words). The proposal may have strong impact on key applications such as Word Sense Disambiguation, Textual Entailment, and others.
|
One of the main current research frontiers in Computational Linguistics is represented by the studies and techniques usually associated with the label “Distributional Semantics” (DS), which are focused on the exploitation of distributional analyses of words in syntactic compositions. Their importance is demonstrated by recent ERC projects (COMPOSES and DisCoTex) and by a growing research interest in the scientific community. The proposal presented in this paper is about going far beyond this state of the art. DS uses traditional Data Mining (DM) techniques on text, considering language as a grammar-based type of data instead of simple unstructured sequences of tokens. It quantifies semantic (in truth lexical) similarities between linguistically refined tokens (words, lemmas, parts of speech, etc.), based on their distributional properties in large corpora. DM relies on Vector Space Models (VSMs), a representation of textual information as vectors of numeric values AUTHOR. DM techniques such as Latent Semantic Analysis (LSA) have been successfully applied to text for information indexing and extraction tasks, using matrix decompositions such as Singular Value Decomposition (SVD) to reconstruct the latent structure behind the distributional hypothesis AUTHOR. LSA usually works by evaluating the relatedness of different terms, forming word clusters that share similar contexts. Explicit Semantic Analysis (ESA) AUTHOR and Salient Semantic Analysis (SSA) AUTHOR revisit these methods in the way they define the conceptual layer: with LSA a word's hidden concept is based on its surrounding words, with ESA it is based on Wikipedia entries, and with SSA it is based on hyperlinked words in Wikipedia entries. These approaches represent only a partial step towards the use of semantic information as input for distributional analysis. While distributional representations excel at modelling lexical semantic phenomena such as word similarity and categorization (conceptual aspect), Formal Semantics in Computational Linguistics focuses on the representation of meaning in a set-theoretic way (functional aspect), providing a systematic treatment of compositionality and reasoning. Recent proposals combining Formal Semantics and Distributional Semantics have appeared AUTHOR AUTHOR AUTHOR, but they employ approaches based on the lexical level. However, 1) the problem of the compositionality of lexical distributional vectors is still open and the proposed solutions are limited to combinations of vectors, 2) reasoning on classic distributional representations is not possible, since they are VSMs at the lexical level only, 3) the connection of DS with traditional Formal Semantics is not straightforward AUTHOR AUTHOR, since DS is limited to a semantics of similarity which is able to support retrieval but not other aspects such as reasoning; and 4) DS does not scale up to phrases and sentences due to data sparseness and growth in model size AUTHOR, restraining the use of tensors.
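For reference, the classic lexical-level pipeline the project aims to go beyond can be illustrated with a compact LSA example: a word-by-document count matrix is reduced with a truncated SVD and word similarity is then computed in the latent space. The matrix and the dimensionality below are toy values.

```python
# Classic lexical-level distributional pipeline (LSA via truncated SVD),
# i.e. the kind of model the project aims to extend with semantic statements.
# The count matrix and dimensionality are toy values.
import numpy as np

words = ["cat", "dog", "car", "engine"]
# word-by-document co-occurrence counts (rows: words, columns: documents)
X = np.array([[3, 2, 0, 0],
              [2, 3, 0, 1],
              [0, 0, 4, 3],
              [0, 1, 3, 4]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]          # latent word representations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for w, v in zip(words[1:], word_vectors[1:]):
    print(f"sim(cat, {w}) = {cosine(word_vectors[0], v):.2f}")
# 'dog' ends up much closer to 'cat' than 'car' or 'engine'.
```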
|
This paper presents a recently funded project on a research frontier in Computational Linguistics. It includes a brief survey of the topic and the essential elements of the proposal together with its expected impact.
| 22
|
Distributional Semantics
|
91_2015
| 2,015
|
Elena Cabrio, Serena Villata
|
Inconsistencies Detection in Bipolar Entailment Graphs
|
ENG
| 2
| 2
| 1
|
CNRS, University of Nice Sophia Antipolis
| 2
| 1
| 1
| 2
|
Elena Cabrio, Serena Villata
| 0
|
0
|
France
|
Paris, Nice
|
In recent years, a number of real-world applications have underlined the need to move from Textual Entailment (TE) pairs to TE graphs where pairs are no longer independent. Moving from single pairs to a graph has the advantage of providing an overall view of the issue discussed in the text, but this may lead to possible inconsistencies due to the combination of the TE pairs into a unique graph. In this paper, we adopt argumentation theory to support human annotators in detecting the possible sources of inconsistencies.
|
A Textual Entailment (TE) system (Dagan et al., 2009) automatically assigns either an entailment or a contradiction relation to independent pairs of textual fragments. However, in some real-world scenarios, like analyzing customer reviews about a service or product, these pairs cannot be considered as independent. For instance, all the reviews about a certain service need to be collected into a single graph in order to understand the overall problems/merits of the service. The combination of TE pairs into a unique graph may generate inconsistencies due to wrong relation assignments by the TE system, which could not have been identified if the TE pairs were considered independently. The detection of such inconsistencies is usually left to human annotators, who later correct them. The need to process such graphs to support annotators is therefore of crucial importance, particularly when dealing with large amounts of data. Our research question is: How can we support annotators in detecting inconsistencies in TE graphs? The term entailment graph was introduced by (Berant et al., 2010) as a structure to model entailment relations between propositional templates. Differently, in this paper we consider bipolar entailment graphs (BEGs), where two kinds of edges are considered, i.e., entailment and contradiction, in order to reason over the consistency of the graph. We answer the research question by adopting abstract argumentation theory (Dung, 1995), a reasoning framework used to detect and solve inconsistencies in so-called argumentation graphs, where nodes are called arguments and edges represent a conflict relation. Argumentation semantics makes it possible to compute consistent sets of arguments, given the conflicts among them. We define the BEGincs (BEG-Inconsistencies) framework, which translates a BEG into an argumentation graph. It then provides annotators with sets of arguments, following argumentation semantics, that are supposed to be consistent. If this is not the case, the TE system wrongly assigned some relations. Moving from single pairs to an overall graph allows for the detection of inconsistencies that would otherwise remain undiscovered. BEGincs does not identify the precise relation causing the inconsistency, but by providing annotators with the consistent argument sets, it supports them in narrowing down the causes of inconsistency.
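A naive sketch of the underlying consistency check is shown below, assuming that contradiction edges are read as symmetric conflicts: a set of arguments is acceptable only if it is conflict-free, and the maximal conflict-free sets point annotators to where the graph splits. This brute-force toy is a stand-in for proper abstract argumentation semantics, not the actual BEGincs framework.

```python
# Naive consistency check: contradiction edges in a bipolar entailment graph are
# read as conflicts between arguments, and a set of arguments is acceptable only
# if it is conflict-free. A toy stand-in for argumentation semantics, not BEGincs.
from itertools import combinations

# toy BEG: nodes are text fragments, edges carry 'entailment' or 'contradiction'
edges = [("T1", "T2", "entailment"),
         ("T2", "T3", "contradiction"),
         ("T1", "T3", "entailment")]        # suspicious given the two edges above

conflicts = {frozenset((a, b)) for a, b, rel in edges if rel == "contradiction"}

def conflict_free(arguments):
    return all(frozenset(pair) not in conflicts for pair in combinations(arguments, 2))

def maximal_conflict_free_sets(nodes):
    """Brute force over subsets (fine for small graphs only)."""
    sets = [set(c) for r in range(len(nodes), 0, -1)
            for c in combinations(nodes, r) if conflict_free(c)]
    return [s for s in sets if not any(s < t for t in sets)]

nodes = {a for e in edges for a in e[:2]}
print(maximal_conflict_free_sets(sorted(nodes)))
# The sets that separate T2 and T3 flag the possible source of inconsistency:
# T1 entails both T2 and T3, yet T2 and T3 contradict each other.
```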
|
We have presented BEGincs, a new formal framework that, by translating a BEG into an argumentation graph, returns inconsistent sets of arguments whenever a wrong relation assignment by the TE system occurred. These inconsistent argument sets are then used by annotators to detect the presence of a wrong assignment and, if so, to narrow down the set of possibly erroneous relations. If no mistakes are made in relation assignment, by definition the BEGincs semantics return consistent argument sets. Assuming that in several real-world scenarios TE pairs are interconnected, we ask the NLP community to contribute to the effort of building suitable resources. In BEGincs, we plan to verify and ensure the transitivity of BEGs.
| 7
|
Lexical and Semantic Resources and Analysis
|
92_2015
| 2,015
|
Alessandro Mazzei
|
Generare messaggi persuasivi per una dieta salutare
|
ITA
| 1
| 0
| 0
|
Università di Torino
| 1
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Turin
|
In this work we consider the possibility of automatically generating persuasive messages that encourage users to follow a healthy diet. After describing a simple architecture for generating template-based messages, we consider the relationship between message design and some persuasion theories.
|
MADIMAN (Multimedia Application for DIet Management) is a project that explores the possibility of applying artificial intelligence to the context of diet. The design idea is to create a virtual dietitian that helps people follow a healthy diet. Exploiting the ubiquity of mobile devices, the aim is to build an artificial intelligence system that (1) retrieves, analyzes and stores the nutritional values of a specific recipe, (2) checks its compatibility with the diet the user is following and (3) persuades the user to make the best choice with respect to this diet. In the hypothetical application scenario, the interaction between the user and food is mediated by an artificial system that, on the basis of various factors, encourages or discourages the user from eating a specific dish. The factors that the system should consider are: the diet the user intends to follow, the food that has been eaten in the previous days, and the nutritional values of the specific dish the user wants to choose. The application architecture that the project intends to realize is a web-service system (Fig. 1) that interacts with the user through an app (Fig. 1-1), analyzes the contents of a specific recipe by means of an information extraction module (Fig. 1-3), reasons by means of an automatic reasoning module (Fig. 1-4) and, on the basis of the reasoning, generates a persuasive message to convince the user to make the best choice using an automatic natural language generation module (NLG, Fig. 1). Automatic language processing comes into play both in the recipe analysis phase AUTHOR and in the generation of the persuasive message. In particular, the message generation system must use the output of the reasoner as its input. At the present state of development of the project, the reasoner is a system based on the theory of Simple Temporal Problems, which produces a verdict (I1, I2, C1, C2, C3; I: incompatible, C: compatible) together with a simple explanation of the result (IPO vs. IPER) AUTHOR. For example, at the end of the reasoning, a dish may be incompatible with a diet because its protein values are too low (I1+IPO) or too high (I1+IPER); or a dish may be compatible although hyper-proteic (C2+IPER), but, if chosen, it will have to be balanced by choosing hypo-proteic dishes in the future. The present work is structured as follows: we first describe the language generation module, then review some theories of persuasion that inspired the design of the generation module, and finally conclude with some considerations on the state of the project and future developments.
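A toy sketch of template-based message selection keyed on the reasoner's output (compatibility code plus IPO/IPER explanation) follows; the templates and the exact mapping are invented for illustration and are not the project's actual messages.

```python
# Toy sketch of template-based generation keyed on the reasoner output
# (compatibility code + IPO/IPER explanation). Templates and the exact mapping
# are invented for illustration; they are not the MADIMAN system's messages.
TEMPLATES = {
    ("I1", "IPER"): "Better not: {dish} is too rich in protein for your diet.",
    ("I1", "IPO"):  "Better not: {dish} provides too little protein today.",
    ("C2", "IPER"): "You can have {dish}, but balance it with low-protein dishes "
                    "in the next days.",
    ("C1", None):   "{dish} fits your diet perfectly. Enjoy!",
}

def generate_message(code, explanation, dish):
    template = TEMPLATES.get((code, explanation)) or TEMPLATES.get((code, None))
    if template is None:
        return f"No advice available for {dish}."      # fallback for unmapped codes
    return template.format(dish=dish)

print(generate_message("C2", "IPER", "carbonara"))
print(generate_message("C1", None, "grilled vegetables"))
```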
|
In this work we have described the main characteristics of a system for generating messages with persuasive intent in the context of diet. In order to quantitatively verify the goodness of the proposed approach, we plan to follow two separate experimental methods. Initially, we are running a simulation that takes into account the various factors that affect the success of our system: on the one hand, it is necessary to model the user's propensity to be persuaded; on the other hand, it is necessary to consider sensible numerical values to model the diet and the dishes. If the simulation gives promising results, we then intend to evaluate the system in a realistic medical trial. Following the evaluation model proposed by Reiter for the STOP system (Reiter et al., 2003), we intend to test the system through control groups in a specific medical context, namely that of clinics treating essential obesity.
| 3
|
Chatbots and Dialogue Systems
|
93_2015
| 2,015
|
Michele Filannino, Marilena Di Bari
|
Gold standard vs. silver standard: the case of dependency parsing for Italian
|
ENG
| 2
| 1
| 0
|
University of Manchester, University of Leeds
| 2
| 1
| 1
| 2
|
Michele Filannino, Marilena Di Bari
| 0
|
0
|
United Kingdom
|
Manchester, Leeds
|
Collecting and manually annotating gold standards in NLP has become so expensive that in recent years the question of whether we can satisfactorily replace them with automatically annotated data (silver standards) has been attracting more and more interest. We focus on the case of dependency parsing for Italian and we investigate whether such a strategy is convenient and to what extent. Our experiments, conducted on very large amounts of silver data, show that quantity does not win over quality.
|
Collecting and manually annotating linguistic data (typically referred to as a gold standard) is a very expensive activity, both in terms of time and effort AUTHOR. For this reason, in recent years the question of whether we can train good Natural Language Processing (NLP) models by using only automatically annotated data (called a silver standard) has been attracting interest AUTHOR. In this case, human annotations are replaced by those generated by pre-existing state-of-the-art systems. The annotations are then merged by using a committee approach specifically tailored to the data AUTHOR. The key advantage of such an approach is the possibility of drastically reducing both time and effort, therefore generating considerably larger data sets in a fraction of the time. This is particularly true for text data in different fields such as temporal information extraction AUTHOR, text chunking AUTHOR and named entity recognition AUTHOR, to cite just a few, and for non-textual data as in medical image recognition AUTHOR. In this paper we focus on the case of dependency parsing for the Italian language. Dependency parsers are systems that automatically generate the linguistic dependency structure of a given sentence AUTHOR; an example is the structure of the sentence ``Essenziale per l'innesco delle reazioni è la presenza di radiazione solare.'' (The presence of solar radiation is essential for triggering the reactions). We investigate whether the use of very large silver standard corpora leads to good dependency parsers, in order to address the following question: which characteristic is more important for a training set, quantity or quality? The paper is organised as follows: Section 2 presents some background work on dependency parsers for Italian; Section 3 presents the silver standard corpus used for the experiments and its linguistic features; Section 4 describes the experimental settings and Section 5 the results of the comparison between the trained parsers (considering different sizes of data) and two test sets, gold and silver. Finally, the paper's contributions are summed up in Section 6.
|
We presented a set of experiments to investigate the contribution of silver standards when used as a substitute for gold standard data. Similar investigations are attracting interest in many NLP subcommunities due to the high cost of generating gold data. The results presented in this paper (parsers' performance against the silver and gold test sets, where in both cases the models exhibit an asymptotic behaviour; detailed figures are given in Table 2, with silver data sizes expressed as numbers of sentences) highlight two important facts: i) increasing the size of the training corpus does not provide any sensible difference in terms of performance; in both test sets, a number of sentences between 5,000 and 10,000 seems to be enough to obtain reliable training, and we note that the size of the EVALITA training set lies within this range; ii) the annotations of the gold and silver corpora may differ, as suggested by the fact that none of the parsers achieved a satisfactory performance when trained and tested on different sources. We also note that the gold and silver test data sets have different characteristics (average sentence length, lexicon and type of annotation), which may partially justify the gap. On the other hand, the fact that a parser re-trained on annotations produced by a state-of-the-art system (DeSR) in the EVALITA task performs poorly on the very same gold set sheds light on the possibility that such an official benchmark test set may not be representative enough. The main limitation of this study lies in the fact that the experiments have not been repeated multiple times, therefore we have no information about the variance of the figures (the UAS column in Table 2). On the other hand, the large size of the data sets involved and the absence of any outlier figure suggest that the overall trends should not change. With the computational facilities available to us for this research, a full analysis of that sort would have required years to be completed. The results presented in the paper shed light on a recent research question about the employability of automatically annotated data. In the context of dependency parsing for Italian, we provided evidence to support the fact that the quality of the annotation is a far more important characteristic to take into account than quantity. A similar study on languages other than Italian would constitute an interesting extension of the research presented here.
| 4
|
Syntax and Dependency Treebanks
|
94_2015
| 2,015
|
Alessia Barbagli, Pietro Lucisano, Felice Dell'Orletta, Simonetta Montemagni, Giulia Venturi
|
CItA: un Corpus di Produzioni Scritte di Apprendenti l'Italiano L1 Annotato con Errori
|
ITA
| 5
| 3
| 1
|
Sapienza Università di Roma, CNR-ILC
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Rome, Pisa
|
In this article we present CItA, the first corpus of written productions by Italian L1 learners in the first and second year of lower secondary school, annotated with grammatical, spelling and lexical errors. The specificities of the corpus and its diachronic nature make it particularly useful both for computational-linguistic applications and for socio-pedagogical studies.
|
The construction of corpora of learner productions has always been at the heart of the research activities of the computational linguistics community, with particular attention paid to the annotation and classification of the mistakes made by learners. Corpora annotated with this type of information are typically used for the study and creation of models of the development of writing skills (Deane and Quinlan, 2010) and for the development of systems to support teaching (Granger, 2003). In this scenario, particular interest has been devoted to the collection and annotation of corpora of productions by L2 learners, employed as a starting point for studies on the development of the interlanguage, for reflection on the possible modification and/or customization of the teacher's teaching action, and for the development of systems for the automatic correction of errors. Most of these activities have concerned the construction of L2 learner corpora, among which the most recent and largest is the NUS Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), used as a reference resource in the 2013 and 2014 editions of the Shared Task on Grammatical Error Correction (Ng et al., 2013; Ng et al., 2014). However, in recent years attention has also been directed to L2s other than English, such as Arabic (Zaghouani et al., 2015), German (Ludeling et al., 2005), Hungarian (Dickinson and Ledbetter, 2012), Basque (Aldabe et al., 2005), Czech and Italian (Andorno and Rastelli, 2009; Boyd et al., 2014). Less attention has been paid to the construction of resources consisting of L1 learner productions. An exception is represented by the KoKo corpus (Abel et al., 2014), a collection of productions by German L1 learners in the last year of upper secondary school, enriched with background information about the learners (e.g. age, gender, socio-economic situation), manual annotation of spelling and grammatical errors, and automatically annotated linguistic information. Within this last scenario, in this article we present CItA (Corpus Italiano di Apprendenti L1), the first corpus of written productions by Italian L1 learners manually annotated with different types of errors and their corrections. The corpus, composed of productions from the first two years of lower secondary school, is to our knowledge not only the first Italian corpus of this type, but also has novel characteristics that make it unique even within the international research panorama.
|
CItA can also be used in socio-pedagogical studies, since it allows the distribution of errors to be related to the background variables. It is thus possible to verify the extent to which changes in writing are attributable to background socio-economic conditions. For example, it is interesting to note that the statistical analyses conducted revealed that the decrease in the incorrect use of vocabulary from the first to the second year is significantly related to the habit of reading. One can also study how grammatical errors vary in a statistically significant way with respect to the location of the school in the center or on the outskirts of Rome: while in the city-center schools errors decrease in the transition from the first to the second year, they increase in two of the four schools on the outskirts. The case of spelling errors is different, as they do not vary in a statistically significant way with the background variables considered. This would confirm some studies (Colombo, 2011; Ferreri, 1971; Lavino, 1975; De Mauro, 1977) which state that spelling correctness is a skill that is acquired over time, because it requires the sedimentation of norms, often arbitrary, that establish non-causal links between sound and spelling.
| 8
|
Learner Corpora and Language Acquisition
|
95_2015
| 2,015
|
Johanna Monti, Federico Sangati, Mihael Arcan
|
TED-MWE: a bilingual parallel corpus with MWE annotation. Towards a methodology for annotating MWEs in parallel multilingual corpora
|
ENG
| 3
| 1
| 1
|
Università di Sassari, Fondazione Bruno Kessler, National University of Ireland
| 3
| 1
| 0
| 1
|
Mihael Arcan
| 0
|
0
|
Ireland, Italy
|
Dublin, Sassari, Trento
|
The translation of multiword expressions (MWEs) by Machine Translation (MT) represents a big challenge, and although MT has improved considerably in recent years, MWE mistranslations still occur very frequently. There is a need to develop large data sets, mainly parallel corpora, annotated with MWEs, since they are useful both for SMT training purposes and for evaluating MWE translation quality. This paper describes a methodology for annotating a parallel spoken corpus with MWEs. The dataset used for this experiment is an English-Italian corpus extracted from the TED spoken corpus and complemented by an SMT output.
|
Multiword expressions (MWEs) represent one of the major challenges for all Natural Language Processing (NLP) applications and in particular for Machine Translation (MT) (Sag et al., 2002). The notion of MWE includes a wide and frequent set of different lexical phenomena with their specific properties, such as idioms, compound words, domain-specific terms, collocations, Named Entities or acronyms. Their morpho-syntactic, semantic and pragmatic idiomaticity (Baldwin and Kim, 2010), together with translational asymmetries (Monti and Todirascu, 2015), i.e. the differences between an MWE in the source language and its translation, prevent technologies from using systematic criteria for properly handling MWEs. For this reason their automatic identification, extraction and translation are very difficult tasks. Recent PARSEME surveys have highlighted that there is a lack of MWE-annotated resources, and in particular of parallel corpora. Moreover, the few available ones are usually limited to the study of specific MWE types and specific language pairs. The focus of our research work is therefore to provide a methodology for annotating a parallel corpus with all MWEs (with no restriction to a specific type) which can be used both for training and for testing SMT systems. We refined this methodology while developing the English-Italian MWE-TED corpus, which contains 1.5K sentences and 31K EN tokens. It is a subset of the TED spoken corpus annotated with all the MWEs detected during the annotation process. This contribution presents the corpus together with the annotation guidelines in Section 3, the annotation process in Section 4 and the MWE annotation statistics in Section 5.
|
We have described the TED-MWE corpus, an English-Italian parallel spoken corpus annotated with MWEs, together with the methodology and the guidelines adopted during the annotation process. Ongoing and future work includes refinement of the annotation tools and guidelines, the extension of the methodology to further languages in order to develop a multilingual MWE-TED corpus. The main aim is to provide useful data both for SMT training purposes and MT quality evaluation.
| 7
|
Lexical and Semantic Resources and Analysis
|
96_2015
| 2,015
|
Anne-Lyse Minard, Manuela Speranza, Rachele Sprugnoli, Tommaso Caselli
|
FacTA: Evaluation of Event Factuality and Temporal Anchoring
|
ENG
| 4
| 3
| 1
|
Fondazione Bruno Kessler, Università di Trento, VU Amsterdam
| 3
| 1
| 0
| 1
|
Tommaso Caselli
| 0
|
0
|
Netherlands, Italy
|
Amsterdam, Trento
|
In this paper we describe FacTA, a new task connecting the evaluation of factuality profiling and temporal anchoring, two strictly related aspects in event processing. The proposed task aims at providing a complete evaluation framework for factuality profiling, at taking the first steps in the direction of narrative container evaluation for Italian, and at making available benchmark data for high-level semantic tasks.
|
Reasoning about events plays a fundamental role in text understanding; it involves different aspects, such as event identification and classification, temporal anchoring of events, temporal ordering, and event factuality profiling. In view of the next EVALITA edition (Attardi et al., 2015), we propose FacTA (Factuality and Temporal Anchoring), the first task comprising the evaluation of both factuality profiling and temporal anchoring, two strictly interrelated aspects of event interpretation. Event factuality is defined in the literature as the level of committed belief expressed by relevant sources towards the factual status of events mentioned in texts (Saurí and Pustejovsky, 2012). The notion of factuality is closely connected to other notions thoroughly explored by previous research conducted in the NLP field, such as subjectivity, belief, hedging and modality; see, among others, (Wiebe et al., 2004; Prabhakaran et al., 2010; Medlock and Briscoe, 2007; Saurí et al., 2006). More specifically, the factuality status of events is related to their degree of certainty (from absolutely certain to uncertain) and to their polarity (affirmed vs. negated). These two aspects are taken into consideration in the factuality annotation frameworks proposed by Saurí and Pustejovsky (2012) and van Son et al. (2014), which inspired the definition of factuality profiling in FacTA. Temporal anchoring consists of associating all temporally grounded events with time anchors, i.e. temporal expressions, through a set of temporal links. The TimeML annotation framework (Pustejovsky et al., 2005) addresses this issue through the specifications for temporal relation (TLINK) annotation, which also implies the ordering of events and temporal expressions with respect to one another. Far from being a trivial task (see systems' performance for English (UzZaman et al., 2013) and for Italian (Mirza and Minard, 2014)), TLINK annotation requires the comprehension of complex temporal structures; moreover, the number of possible TLINKs grows with the number of annotated events and temporal expressions. Pustejovsky and Stubbs (2011) introduced the notion of narrative container with the aim of reducing the number of TLINKs to be identified in a text while improving informativeness and accuracy. A narrative container is a temporal expression or an event explicitly mentioned in the text into which other events temporally fall (Styler IV et al., 2014). The use of narrative containers proved to be useful to accurately place events on timelines in the domain of clinical narratives (Miller et al., 2013). Temporal anchoring in FacTA moves in the direction of this notion of narrative container by focusing on specific types of temporal relations that link an event to the temporal expression to which it is anchored. However, anchoring events in time is strictly dependent on their factuality profiling. For instance, counterfactual events will never have a temporal anchor or be part of a temporal relation (i.e. they never occurred); this may not hold for speculated events, whose association with a temporal anchor or participation in a temporal relation is important to monitor future event outcomes.
|
The FacTA task connects two related aspects of events: factuality and temporal anchoring. The availability of this information for Italian will both promote research in these areas and fill a gap with respect to other languages, such as English, for a variety of semantic tasks. Factuality profiling is a challenging task aimed at identifying the speaker/writer's degree of commitment to the events referred to in a text. Access to this type of information plays a crucial role in distinguishing relevant from non-relevant information in more complex tasks such as textual entailment, question answering, and temporal processing. On the other hand, anchoring events in time requires interpreting temporal information that is often not explicitly provided in texts. The identification of the correct temporal anchor facilitates the organization of events into groups of narrative containers, which could be further used to improve the identification and classification of in-document and cross-document temporal relations. The new annotation layers will be added on top of an existing dataset, the EVENTI corpus, thus allowing the re-use of existing resources and promoting the development of multi-layered annotated corpora; moreover, a new linguistic resource, WItaC, will be provided. The availability of these data is to be considered strategic, as it will support the study of the interactions between different language phenomena and enhance the development of more robust systems for automatic access to the content of texts. The use of well-structured annotation guidelines, grounded in both official and de facto standards, is a stimulus for the development of multilingual approaches and promotes discussion and reflection in the NLP community at large. Considering the success of evaluation campaigns such as Clinical TempEval at SemEval 2015, and given the presence of an active community focused on extra-propositional aspects of meaning (e.g. attribution), making new annotated data available in the framework of an evaluation campaign for a language other than English can have a large impact on the NLP community.
| 7
|
Lexical and Semantic Resources and Analysis
|
97_2015
| 2,015
|
Giorgio Guzzetta, Federico Nanni
|
Computing, memory and writing: some reflections on an early experiment in digital literary studies
|
ENG
| 2
| 0
| 0
|
University College Cork, Università di Bologna
| 2
| 1
| 0
| 1
|
Giorgio Guzzetta
| 0
|
0
|
Ireland, Italy
|
Cork, Bologna
|
In this paper we present the first steps of a research that aims at investigating the possible relationship between the emergence of a discontinuity in the study of poetic influence and some early experiments of humanities computing. The background idea is that the evolution of computing in the 1960s and 1970s might have triggered a transformation of the notion of influence and that today's interactions between the field of natural language processing and digital humanities are still sustaining it. In order to do so, we studied a specific interdisciplinary project dedicated to the topic and we reproduced those experiments. Then we compared the results with different text mining techniques in order to understand how contemporary methods can deal with the rethinking of the notion of influence.
|
In recent years, as the institutional presence of digital humanities grew stronger, the need for a closer look at the history of humanities computing (the name under which this community of researchers was originally gathered) became more urgent, and some answers have been attempted AUTHOR. Rehearsing the history of humanities computing proved to be a challenging task, because of its hybrid and interdisciplinary nature and because of its entanglement with a field like computing, which is epistemologically unstable and interdisciplinary as well AUTHOR. This paper is a contribution to this attempt to develop a history of humanities computing, trying to combine the histories of computing, of computational linguistics, and of literary studies. We present here the first steps of a research project that aims at investigating the possibility that the emergence of a "discontinuity" in the notion of literary influence (and, consequently, of literary source), or at least the critical awareness of it, might be related to some early experiments in humanities computing. The background idea is that the evolution of computing in the 1960s and 1970s might have contributed to this transformation of the notion of influence, and that today's interactions between the field of natural language processing and digital humanities might have further developed it, possibly in new directions. To this end, the paper is organised as follows. First of all, we define the problem of influence from the point of view of literary studies and literary theory. With that background in mind, we study a specific interdisciplinary project dedicated to the topic AUTHOR and analyse Raben's key role in AUTHOR computational analysis. Moreover, to gain a complete understanding of the methods applied in this work, we recreated the same approach with a script in Python. Then, we decided to update Goodman and Villani's approach by adopting contemporary text mining methods. In conducting this specific task, our purpose was not only to compare the different techniques and to highlight possible mistakes in Goodman and Villani's approach, but especially to understand whether contemporary methods commonly adopted in natural language processing and digital humanities are able to answer the questions and address the problematic issues that emerged during the rethinking of the notion of influence in recent years.
|
To conclude, as far as the possible boundary between humanities computing and the study of influence in literary studies is concerned, we believe that computational techniques helped to develop a keen sense of the issues involved, foregrounding the role of mechanical reading in de-idealising the problem. The use of machines in turn triggered Bloom's creation of his own "machine for criticism" (Bloom, 1976), built in order to understand how the anxiety of influence was dealt with by writers. If it is true that Bloom's theory "anxiously responds to [...] the subliminally perceived threat of textual anonymity promoted by the 'mechanical reproduction' of available texts" (Renza, 1990), that is to say a threat that, with the digital, would soon reach an entirely new scale, then we can consider Raben's work as moved by a similar tension. Compared to Bloom, however, Raben had the advantage of working, so to speak, from within the machine language. For this reason, he was able to begin replacing the traditional approach, which emphasised continuity within literary traditions, with a different one more focused on the way in which creative writing works, on the role of linguistic memory, and on "creative" reading (or, as Prose (2006) put it, reading as a writer).
| 9
|
Textual Genres & Literature Linguistics
|
98_2015
| 2,015
|
Marilena Di Bari, Serge Sharoff, Martin Thomas
|
A manually-annotated Italian corpus for fine-grained sentiment analysis
|
ENG
| 3
| 1
| 1
|
University of Leeds
| 1
| 1
| 1
| 3
|
Marilena Di Bari, Serge Sharoff, Martin Thomas
| 0
|
0
|
United Kingdom
|
Leeds
|
This paper presents the results of the annotation carried out on the Italian section of the SentiML corpus, consisting of both originally-produced and translated texts of different types. The two main advantages are that: (i) the work relies on the linguistically-motivated assumption that, by encapsulating opinions in pairs (called appraisal groups), it is possible to annotate (and automatically extract) their sentiment in context; (ii) it is possible to compare Italian to its English and Russian counterparts, as well as to extend the annotation to other languages.
|
Overall, the field of Sentiment Analysis (SA) aims at automatically classifying opinions as positive, negative or neutral AUTHOR. While at first the focus of SA was on document-level (coarse-grained) classification, over the years it has increasingly shifted to the sentence level or below (fine-grained). This shift has been due to both linguistic and application reasons. Linguistic reasons arise because sentiment is often expressed about specific entities rather than the document as a whole. As for practical reasons, SA tasks often aim at discriminating between more specific aspects of these entities. For example, if an opinion is supposed to be about the plot of a movie, it is not unusual that the user also evaluates the actors' performance or the director's choices AUTHOR. For SA applications these opinions need to be assessed separately. Moreover, opinions are often not expressed as simple and direct assertions, but through a number of stylistic devices such as pronominal references, abbreviations, idioms and metaphors. Finally, the automatic identification of sarcasm, irony and humour is even more challenging AUTHOR. For all these reasons, fine-grained sentiment analysis looks at entities that are usually chains of words such as "noun+verb+adjective" (e.g. the house is beautiful) or "adverb+adjective+noun" (e.g. very nice car) AUTHOR. In addition to the multitude of approaches to fine-grained SA, there is also a shortage of comparable multilingual studies and available resources. To close this gap, we designed the SentiML annotation scheme AUTHOR and applied it to texts in three languages: English, Italian and Russian. The proposed annotation scheme extends previous works AUTHOR and allows multi-level annotations of three categories: target (T) (the expression the sentiment refers to), modifier (M) (the expression conveying the sentiment) and appraisal group (AG) (the pair formed by a modifier and a target). For example, in "Gli uomini hanno il potere di [[sradicare]M la [povertà]T]AG, ma anche di [[sradicare]M le [tradizioni]T]AG" (Men have the power to eradicate poverty, but also to eradicate traditions), the groups "sradicare povertà" (eradicate poverty) and "sradicare tradizioni" (eradicate traditions) have an opposite sentiment despite including the same word sradicare (to eradicate). This scheme has been developed in order to facilitate the annotation of the sentiment and of other advanced linguistic features that contribute to it, but also of the appraisal type according to the Appraisal Framework (AF) AUTHOR, from a multilingual perspective (Italian, English and Russian). The AF is a development of Systemic Functional Linguistics AUTHOR specifically concerned with the study of the language of evaluation, attitude and emotion. It consists of attitude, engagement and graduation. Of these, attitude is sub-divided into affect, which deals with personal emotions and opinions (e.g. excited, lucky); judgement, which concerns the author's attitude towards people's behaviour (e.g. nasty, blame); and appreciation, which considers the evaluation of things (e.g. unsuitable, comfortable). The engagement system considers the positioning of oneself with respect to the opinions of others, while graduation investigates how the use of language amplifies or diminishes attitude and engagement. In particular, force is related to intensity, quantity and temporality. To the best of our knowledge, the AF has only been applied to Italian for purposes not related to computation AUTHOR.
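As a rough illustration of why sentiment must be resolved at the level of appraisal groups rather than of single words, here is a minimal Python sketch; the toy prior lexicons and the combination rule are our own assumptions for the example, not part of the SentiML specification.

```python
# Toy prior orientations; in practice these would come from a sentiment lexicon.
MODIFIER_PRIOR = {"sradicare": "negative"}
TARGET_PRIOR = {"povertà": "negative", "tradizioni": "positive"}

def appraisal_group_polarity(modifier: str, target: str) -> str:
    """Contextual polarity of a (modifier, target) appraisal group:
    removing something negative is positive, removing something positive is negative."""
    m = MODIFIER_PRIOR.get(modifier, "neutral")
    t = TARGET_PRIOR.get(target, "neutral")
    if m == "negative" and t == "negative":
        return "positive"
    if m == "negative" and t == "positive":
        return "negative"
    return m if t == "neutral" else t

print(appraisal_group_polarity("sradicare", "povertà"))     # positive
print(appraisal_group_polarity("sradicare", "tradizioni"))  # negative
```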
This paper is organized as follows: Section 2 describes the annotation scheme and the annotated Italian corpus, Section 3 reports the results and finally Section 4 our conclusions.
|
In this paper we have described a manually-annotated corpus of Italian for fine-grained sentiment analysis. The manual annotation has been carried out so as to include important linguistic features. Apart from extracting statistics related to the annotations, we have also compared the manual annotations to a sentiment dictionary and demonstrated that (i) the dictionary includes only 29.29% of the annotated words, and (ii) the prior orientation given in the dictionary differs from the correct one given by the context in 28.18% of the cases. The original and annotated texts in Italian (along with English and Russian) and the Document Type Definition (DTD) of SentiML to be used with MAE are publicly available. In the meantime, the authors are already working on an automatic system to identify and classify appraisal groups multilingually.
| 6
|
Sentiment, Emotion, Irony, Hate
|
99_2015
| 2,015
|
Anna Feltracco, Elisabetta Jezek, Bernardo Magnini, Simone Magnolini
|
Annotating opposition among verb senses: a crowdsourcing experiment
|
ENG
| 4
| 2
| 1
|
Fondazione Bruno Kessler, Università di Pavia, Università di Brescia
| 3
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento, Pavia, Brescia
|
We describe the acquisition, based on crowdsourcing, of opposition relations among Italian verb senses in the T-PAS resource. The annotation suggests the feasibility of a large-scale enrichment.
|
Several studies have been carried out on the definition of opposition in linguistics, philosophy, cognitive science and psychology. Our notion of opposition is based on lexical semantic studies by Lyons (1977), Cruse (1986; 2002; 2011), and Pustejovsky (2000), as synthesized in Jezek (2015). The category of opposites can be said to include pairs of terms that contrast each other with respect to one key aspect of their meaning, such that together they exhaust this aspect completely. Examples include the following pairs: to open / to close, to rise / to fall. Paradoxically, the first step in the process of identifying a relationship of opposition often consists in identifying something that the meanings of the words under examination have in common. A second step is to identify a key aspect in which the two meanings oppose each other. Opposites cannot be true simultaneously of the same entity; for example, a price cannot be said to rise and to fall at exactly the same point in time. It is an open question whether opposition is a semantic or a lexical relation (Murphy, 2010; Fellbaum, 1998); what is clear is that a predicate considered the opposite of another predicate does not activate this relation for all its senses. For example, the Italian verb abbattere is considered the opposite of costruire insofar as the former is taken in its sense of to destroy (a building) and the latter in its sense of to build (a building). The opposition relation does not hold if abbattere is considered in its sense of to kill (an animal). Oppositions between verb senses are poorly encoded in lexical resources. English WordNet 3.1 (Miller et al., 1990) tags oppositions among verb senses using the label antonymy; for example, increase#1 is in an antonymy relation with decrease#1, diminish#1, lessen#1, fall#11. In VerbOcean (Chklovski and Pantel, 2004), opposition (antonymy) is considered a symmetric relation between verbs, which includes several subtypes; the relation is extracted at the verb level (not at the sense level). FrameNet (Ruppenhofer et al., 2010), on the other hand, has no tag for the opposition relation, although a subset of cases can be traced via the "perspective on" relation. As regards Italian, in MultiWordNet (Pianta et al., 2002) the opposition relation (labeled antonymy relation) is considered a lexical relation and is represented in the currently available version for English, but not for Italian. In SIMPLE (Lenci et al., 2000), the opposition relation (antonymy) is considered a relation between word senses and has been defined for adjectives (e.g., dead/alive and hot/cold), although the authors specify that it could be extended to other parts of speech as well. In Senso Comune (Oltramari et al., 2013) the annotation of the opposition relation appears not to be implemented, even though the tag for the relation (antonimia) is present. The experiment described in this paper focuses on the annotation of opposition relations among verb senses. We annotate these relations in the lexical resource T-PAS (Jezek et al., 2014), an inventory of typed predicate argument structures for Italian manually acquired from corpora through inspection and annotation of actual uses of the analyzed verbs. The corpus instances associated with each T-PAS represent a rich set of grounded information not available in other resources and facilitate the interpretation of the different senses of the verbs.
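The sense-level encoding of opposition in English WordNet mentioned above can be inspected programmatically; the following sketch uses NLTK's WordNet interface (it assumes the WordNet data has been downloaded via nltk.download('wordnet'), and the exact senses returned may vary with the WordNet version).

```python
from nltk.corpus import wordnet as wn

# Antonymy is stored between lemmas of specific senses (synsets),
# not between the verbs as undifferentiated word forms.
for synset in wn.synsets("increase", pos=wn.VERB):
    for lemma in synset.lemmas():
        for antonym in lemma.antonyms():
            print(f"{lemma.name()} ({synset.name()})  <-opposes->  "
                  f"{antonym.name()} ({antonym.synset().name()})")
```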
We collected data using crowdsourcing, a methodology already used in other NLP tasks, such as Frame Semantic Role annotation (Fossati et al., 2013; Feizabadi and Padó, 2014), Lexical Substitution (Kremer et al., 2014), Contradictory Event Pairs Acquisition (Takabatake et al., 2015).
|
In this paper we have presented a crowdsourcing experiment for the annotation of the opposition relation among verb senses in the Italian T-PAS resource. The experiment has shown the feasibility of collecting opposition relations among Italian verb senses through crowdsourcing. We propose a methodology based on the automatic substitution of a verb with a candidate opposite and show that the IAA obtained using sense examples is comparable with the IAA obtained by other annotation efforts based on sense definitions. Ongoing work includes further annotation of the opposition relations in T-PAS using crowd answers and an in-depth examination of the causes that lead to the generation of sentences that make no sense.
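A minimal sketch of the substitution step underlying the crowdsourcing methodology described above; the function name and question wording are hypothetical, and real Italian data would also require inflecting the substituted verb rather than doing a plain string replacement.

```python
def build_item(sentence: str, verb: str, candidate_opposite: str) -> dict:
    """Turn a corpus example of a verb sense into a crowdsourcing item
    by replacing the verb with a candidate opposite."""
    return {
        "original": sentence,
        "modified": sentence.replace(verb, candidate_opposite, 1),
        "question": "Does the modified sentence describe the opposite situation?",
    }

item = build_item("Hanno deciso di abbattere il vecchio edificio.",
                  "abbattere", "costruire")
print(item["modified"])  # "Hanno deciso di costruire il vecchio edificio."
```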
| 7
|
Lexical and Semantic Resources and Analysis
|
100_2015
| 2,015
|
Giovanni Moretti, Rachele Sprugnoli, Sara Tonelli
|
Digging in the Dirt: Extracting Keyphrases from Texts with KD
|
ENG
| 3
| 2
| 0
|
Fondazione Bruno Kessler, Università di Trento
| 2
| 0
| 0
| 0
|
0
| 0
|
0
|
Italy
|
Trento
|
In this paper we present a keyphrase extraction system called Keyphrase Digger (KD). The tool uses both statistical measures and linguistic information to detect a weighted list of n-grams representing the most important concepts of a text. KD is the reimplementation of an existing tool, which has been extended with new features, a high level of customizability, a shorter processing time and an extensive evaluation on different text genres in English and Italian (i.e. scientific articles and historical texts).
|
This paper presents Keyphrase Digger (henceforth KD), a new implementation of the KX system for keyphrase extraction. Both KX (Pianta and Tonelli, 2010; Tonelli et al., 2012) and KD combine statistical measures with linguistic information to identify and extract weighted keyphrases from English and Italian texts. KX took part in the SemEval 2010 task on "Automatic Keyphrase Extraction from Scientific Articles" (Kim et al., 2010), achieving the 7th best result out of 20 in the final ranking. KX is part of TextPro (Pianta et al., 2008), a suite of NLP tools developed by Fondazione Bruno Kessler. The aim of the KX re-implementation was to improve system performance in terms of F-measure, processing speed and customizability, so as to make its integration into web-based applications possible. Moreover, its adaptation to different types of texts has become possible also for non-expert users, and its application to large document collections has been significantly improved. Keyphrases are n-grams of different lengths, both single- and multi-token expressions, which capture the main concepts of a given document (Turney, 2000). Their extraction is very useful when integrated into complex NLP tasks such as text categorization (Hulth and Megyesi, 2006), opinion mining (Berend, 2011) and summarization (D'Avanzo and Magnini, 2005). Moreover, keyphrases, especially if displayed using an effective visualization, can help summarize and navigate document collections, so as to allow their so-called 'distant reading' (Moretti, 2013). This need to easily grasp the concise content of a text through keyphrases is particularly relevant given the increasing availability of digital document collections in many domains. Nevertheless, outside the computational linguistics community, for example among humanities scholars, the extraction of keywords is often equated with the extraction of the most frequent (single) words in a text; see, for instance, the success of tools such as Textal and Voyant in the Digital Humanities community. In some cases, stopwords are still included among the top-ranked keywords, leading to a key-concept list which is scarcely informative and merely reflects the Zipfian distribution of words in language. KD, instead, is designed to be easily customized by scholars from different communities, while sticking to the definition of keyphrases in use in the computational linguistics community. The remainder of the paper is structured as follows. In Section 2 we describe KD's architecture and features, while in Section 3 the system evaluation is reported. We present the application of KD to the historical domain and the web-based interface available online in Section 4. Future work and conclusions are presented in Section 5.
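For readers unfamiliar with the general recipe of combining statistical measures with linguistic information, the following Python sketch filters candidate n-grams by a simple PoS pattern and ranks them by frequency; it illustrates the approach in general, not KD's actual algorithm, patterns or weighting scheme, and the toy tagged input is an assumption.

```python
import re
from collections import Counter

# (token, PoS) pairs as produced by any PoS tagger; tags here are illustrative.
tagged = [("keyphrase", "NOUN"), ("extraction", "NOUN"), ("is", "VERB"),
          ("useful", "ADJ"), ("for", "ADP"), ("text", "NOUN"),
          ("categorization", "NOUN"), ("and", "CONJ"), ("keyphrase", "NOUN"),
          ("extraction", "NOUN"), ("helps", "VERB"), ("summarization", "NOUN")]

# Keep only candidates whose PoS sequence matches ADJ* NOUN+.
PATTERN = re.compile(r"^(ADJ\s)*(NOUN\s?)+$")

def candidates(tagged, max_len=3):
    for n in range(1, max_len + 1):
        for i in range(len(tagged) - n + 1):
            gram = tagged[i:i + n]
            pos_seq = " ".join(tag for _, tag in gram)
            if PATTERN.match(pos_seq):
                yield " ".join(tok for tok, _ in gram)

ranking = Counter(candidates(tagged)).most_common(5)
print(ranking)  # [('keyphrase', 2), ('extraction', 2), ('keyphrase extraction', 2), ...]
```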
|
This paper presents KD, a keyphrase extraction system that re-implements the basic algorithm of KX but adds new features, a high level of customizability and improved processing speed. KD currently works on English and Italian and can take as input texts pre-processed with different available PoS taggers and lemmatizers for these two languages. Nevertheless, the system could be easily adapted to handle more languages and additional PoS taggers by modifying a few configuration parameters. KD will soon be integrated into the next TextPro release and will also be released as a standalone module. In the meantime, we have made it available online as part of an easy-to-use web application, so that it can be easily accessed also by users without a technical background. This work targets in particular humanities scholars, who often do not know how to access state-of-the-art tools for keyphrase extraction.
| 7
|
Lexical and Semantic Resources and Analysis
|