Can We Use SE-specific Sentiment Analysis Tools in a Cross-Platform Setting?

Detailed bibliography
Published in: 2020 IEEE/ACM 17th International Conference on Mining Software Repositories (MSR), pp. 158–168
Main authors: Novielli, Nicole; Calefato, Fabio; Dongiovanni, Davide; Girardi, Daniela; Lanubile, Filippo
Format: Conference paper
Language: English
Published: ACM, 01.05.2020
ISSN: 2574-3864
Description
Summary: In this paper, we address the problem of using sentiment analysis tools 'off-the-shelf', that is, when a gold standard is not available for retraining. We evaluate the performance of four SE-specific tools in a cross-platform setting, i.e., on a test set collected from data sources different from the one used for training. We find that (i) the lexicon-based tools outperform the supervised approaches retrained in a cross-platform setting and (ii) retraining can be beneficial in within-platform settings in the presence of robust gold standard datasets, even using a minimal training set. Based on our empirical findings, we derive guidelines for reliable use of sentiment analysis tools in software engineering.
DOI: 10.1145/3379597.3387446
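
To make the cross-platform setup in the summary concrete, the sketch below contrasts an off-the-shelf lexicon-based tool with a supervised classifier retrained on one platform and tested on another. It does not use the four SE-specific tools or the datasets evaluated in the paper: NLTK's general-purpose VADER stands in for a lexicon-based tool, a TF-IDF + logistic regression pipeline stands in for a supervised one, and the in-line "platform" sentences are toy placeholders.

```python
# Minimal sketch of a cross-platform evaluation; NOT the paper's tools or data.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER

# Placeholder gold standards from two different "platforms"
# (e.g., Q&A posts vs. issue comments); labels are positive/negative/neutral.
platform_a = [
    ("Thanks, this fix works perfectly!", "positive"),
    ("The build keeps failing and I have no idea why.", "negative"),
    ("See the attached stack trace for details.", "neutral"),
    ("Great job on the refactoring, much cleaner now.", "positive"),
    ("This API is a nightmare to use.", "negative"),
    ("The config file lives under /etc by default.", "neutral"),
]
platform_b = [
    ("Awesome, the patch solved my problem.", "positive"),
    ("I'm stuck on this bug and it's driving me crazy.", "negative"),
    ("The method returns a list of strings.", "neutral"),
]

def vader_label(text, analyzer):
    """Map VADER's compound score to a polarity label (usual +/-0.05 thresholds)."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

texts_a, labels_a = zip(*platform_a)
texts_b, labels_b = zip(*platform_b)

# Off-the-shelf lexicon-based tool: no retraining, applied directly to platform B.
analyzer = SentimentIntensityAnalyzer()
lexicon_preds = [vader_label(t, analyzer) for t in texts_b]

# Supervised tool retrained on platform A, then tested cross-platform on platform B.
supervised = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
supervised.fit(texts_a, labels_a)
supervised_preds = supervised.predict(texts_b)

print("Lexicon-based (off-the-shelf) macro-F1:",
      f1_score(labels_b, lexicon_preds, average="macro"))
print("Supervised (cross-platform)   macro-F1:",
      f1_score(labels_b, supervised_preds, average="macro"))
```

With realistic gold standards in place of the toy lists, the same comparison can be repeated within-platform (train and test on the same source) to mirror the paper's contrast between cross-platform and within-platform retraining.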