A Comparative Analysis of Article Recommendation Platforms



Bibliographic Details
Published in: 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pp. 1-10
Authors: Alchokr, Rand; Krüger, Jacob; Saake, Gunter; Leich, Thomas
Format: Conference Paper
Language: English
Published: IEEE, 01.09.2021
Online Access: Full Text
Description
Abstract: Even though it is a controversial matter, research (e.g., publications, projects, researchers) is regularly evaluated based on some form of scientific impact. In particular, citation counts and metrics that build on them (e.g., impact factor, h-index) are established for this purpose, despite missing evidence that they are reasonable and despite researchers rightfully criticizing their use. Several ideas aim to tackle such problems by proposing to abandon metrics-based evaluations or by suggesting new methods that cover other properties, for instance, through Altmetrics or Article Recommendation Platforms (ARPs). ARPs are particularly interesting, since they encourage their community to decide which publications are important, for instance, based on recommendations, post-publication reviews, comments, or discussions. In this paper, we report a comparative analysis of 11 ARPs that utilize human expertise to assess the quality, correctness, and potential importance of a publication. We compare the properties, pros, and cons of the different ARPs and discuss their adoption potential for computer science. We find that some of the platforms' features are challenging to understand, but the platforms reinforce the trend of involving humans instead of metrics in evaluating research.
DOI: 10.1109/JCDL52503.2021.00012