A Comparative Analysis of Article Recommendation Platforms
| Published in: | 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pp. 1-10 |
|---|---|
| Main Authors: | , , , |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 01.09.2021 |
| Summary: | Even though it is a controversial matter, research (e.g., publications, projects, researchers) is regularly evaluated based on some form of scientific impact. In particular, citation counts and the metrics built on them (e.g., impact factor, h-index) are established for this purpose, despite a lack of evidence that they are reasonable and despite justified criticism of their use by researchers. Several ideas aim to tackle these problems by proposing to abandon metrics-based evaluation or by suggesting new methods that cover other properties, for instance through Altmetrics or Article Recommendation Platforms (ARPs). ARPs are particularly interesting, since they encourage their communities to decide which publications are important, for instance based on recommendations, post-publication reviews, comments, or discussions. In this paper, we report a comparative analysis of 11 ARPs, which utilize human expertise to assess the quality, correctness, and potential importance of a publication. We compare the properties, pros, and cons of the ARPs, and discuss their adoption potential for computer science. We find that some of the platforms' features are challenging to understand, but that they reinforce the trend of involving humans instead of metrics in evaluating research. |
|---|---|
| DOI: | 10.1109/JCDL52503.2021.00012 |