Evaluation of stance annotation of Twitter data

Saved in:
Bibliographic details
Title: Evaluation of stance annotation of Twitter data
Authors: Simaki, Vasiliki, Seitanidi, Eleni, Paradis, Carita
Other authors: Lund University, Joint Faculties of Humanities and Theology, Departments, Centre for Languages and Literature, Section 4, Division of English Studies, English Studies (Originator); Lund University, Joint Faculties of Humanities and Theology, Departments, Centre for Languages and Literature, Section 3, Division of Greek Studies, Latin, Modern Greek, and Spanish Studies, Modern Greek (Originator); Lund University, Joint Faculties of Humanities and Theology, Research platforms, HT, LAMiNATE (Language Acquisition, Multilingualism, and Teaching) (Originator)
Source: Research in Corpus Linguistics. 11(1):53-80
Keywords: Humanities and the Arts, Languages and Literature, Comparative Language Studies and Linguistics, Studies of Specific Languages
Description: Taking stance towards any topic, event or idea is a common phenomenon on Twitter and on social media in general. Twitter users express their opinions about different matters and assess other people’s opinions in various discursive ways. Identifying and analysing the linguistic means that people use to take different stances leads to a better understanding of language and user behaviour on Twitter. Stance is a multidimensional concept involving a broad range of related notions such as modality, evaluation and sentiment. In this study, we annotate data from Twitter using six notional stance categories (contrariety, hypotheticality, necessity, prediction, source of knowledge and uncertainty), following a comprehensive annotation protocol that includes inter-coder reliability measurements. The relatively low agreement between annotators highlighted the challenges the task entailed and led us to question the inter-annotator agreement score as a reliable measure of annotation quality for notional categories. The nature of the data, the difficulty of the stance annotation task and the type of stance categories are discussed, and potential solutions are suggested.
Access URL: https://doi.org/10.32714/ricl.11.01.03
Database: SwePub
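
Note: as a minimal, hypothetical illustration of the kind of inter-coder reliability measurement mentioned in the description above, the Python sketch below computes Cohen's kappa for two imaginary annotators labelling tweets with the six stance categories. The annotator labels and the choice of Cohen's kappa are assumptions for illustration only and are not taken from the article.

    # Illustrative sketch only: chance-corrected agreement between two
    # hypothetical annotators using the six notional stance categories.
    from sklearn.metrics import cohen_kappa_score

    CATEGORIES = ["contrariety", "hypotheticality", "necessity",
                  "prediction", "source_of_knowledge", "uncertainty"]

    # Hypothetical labels assigned by two annotators to the same ten tweets
    # (invented data, not from the study).
    annotator_a = ["necessity", "uncertainty", "prediction", "contrariety",
                   "uncertainty", "hypotheticality", "necessity",
                   "source_of_knowledge", "prediction", "uncertainty"]
    annotator_b = ["necessity", "prediction", "prediction", "contrariety",
                   "uncertainty", "uncertainty", "necessity",
                   "source_of_knowledge", "prediction", "hypotheticality"]

    kappa = cohen_kappa_score(annotator_a, annotator_b, labels=CATEGORIES)
    print(f"Cohen's kappa: {kappa:.2f}")  # value in [-1, 1]; 1 = perfect agreement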