COMPARE: A Taxonomy and Dataset of Comparison Discussions in Peer Reviews


Bibliographic Details
Published in: 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pp. 238–241
Main Authors: Singh, Shruti; Singh, Mayank; Goyal, Pawan
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2021
Description
Summary: Comparing research papers is a conventional method to demonstrate progress in experimental research. We present COMPARE, a taxonomy and a dataset of comparison discussions in peer reviews of research papers in the domain of experimental deep learning. From a thorough observation of a large set of review sentences, we build a taxonomy of categories in comparison discussions and present a detailed annotation scheme to analyze this. Overall, we annotate 117 reviews covering 1,800 sentences. We experiment with various methods to identify comparison sentences in peer reviews and report a maximum F1 Score of 0.49. We also pretrain two language models specifically on ML, NLP, and CV paper abstracts and reviews to learn informative representations of peer reviews. The annotated dataset and the pretrained models are available at https://github.com/shruti-singh/COMPARE.
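The reported F1 score of 0.49 is the standard evaluation metric for this kind of binary sentence classification (comparison vs. non-comparison): the harmonic mean of precision and recall. A minimal sketch of how such a score is computed, using hypothetical gold and predicted labels rather than data from the actual dataset:

```python
def f1_score(gold, pred):
    """Binary F1: harmonic mean of precision and recall.

    gold/pred are equal-length lists of 0/1 labels,
    where 1 marks a comparison sentence.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels for six review sentences (illustration only)
gold = [1, 0, 1, 1, 0, 0]
pred = [1, 0, 0, 1, 1, 0]
print(round(f1_score(gold, pred), 2))  # → 0.67
```

Note that a low F1 on this task can reflect either missed comparison sentences (low recall) or spurious detections (low precision); the harmonic mean penalizes an imbalance between the two.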
DOI: 10.1109/JCDL52503.2021.00068