The ICML 2023 Ranking Experiment: Examining Author Self-Assessment in ML/AI Peer Review
| Title: | The ICML 2023 Ranking Experiment: Examining Author Self-Assessment in ML/AI Peer Review |
|---|---|
| Authors: | Buxin Su, Jiayao Zhang, Natalie Collina, Yuling Yan, Didong Li, Kyunghyun Cho, Jianqing Fan, Aaron Roth, Weijie Su |
| Source: | Journal of the American Statistical Association, pp. 1–12 |
| Publication Status: | Preprint |
| Publisher Information: | Informa UK Limited, 2025. |
| Publication Year: | 2025 |
| Subject Terms: | Computer Science and Game Theory (cs.GT), Machine Learning (cs.LG, stat.ML), Digital Libraries (cs.DL), Applications (stat.AP), FOS: Computer and information sciences |
| Description: | We conducted an experiment during the review process of the 2023 International Conference on Machine Learning (ICML), asking authors with multiple submissions to rank their papers based on perceived quality. In total, we received 1,342 rankings, each from a different author, covering 2,592 submissions. In this paper, we present an empirical analysis of how author-provided rankings could be leveraged to improve peer review processes at machine learning conferences. We focus on the Isotonic Mechanism, which calibrates raw review scores using the author-provided rankings. Our analysis shows that these ranking-calibrated scores outperform the raw review scores in estimating the ground truth “expected review scores” in terms of both squared and absolute error metrics. Furthermore, we propose several cautious, low-risk applications of the Isotonic Mechanism and author-provided rankings in peer review, including supporting senior area chairs in overseeing area chairs’ recommendations, assisting in the selection of paper awards, and guiding the recruitment of emergency reviewers. |
| Document Type: | Article |
| Language: | English |
| ISSN: | 1537-274X (online), 0162-1459 (print) |
| DOI: | 10.1080/01621459.2025.2510006 |
| DOI: | 10.17615/zhk4-4y76 |
| DOI: | 10.48550/arxiv.2408.13430 |
| Access URL: | http://arxiv.org/abs/2408.13430 |
| Rights: | CC BY |
| Accession Number: | edsair.doi.dedup.....cfb3c8de7eed28ef2996022ae68a25cb |
| Database: | OpenAIRE |
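Note on the Isotonic Mechanism: the abstract describes it only at a high level. The sketch below illustrates the core calibration step, a least-squares isotonic regression that projects raw review scores onto the ordering implied by an author's ranking. The scores, the ranking values, and the use of scikit-learn are illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch of the Isotonic Mechanism's calibration step.
# Given an author's ranking of their own submissions (1 = best) and the
# papers' raw mean review scores, project the scores onto the monotone
# cone implied by the ranking via least-squares isotonic regression.
# All names and numbers here are hypothetical, not taken from the paper.
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_scores = np.array([5.2, 6.1, 4.8])   # hypothetical mean review scores
author_rank = np.array([2, 1, 3])        # 1 = the author's best paper

# Order papers from worst-ranked to best-ranked, so calibrated scores
# must be non-decreasing along the author's ranking.
order = np.argsort(-author_rank)          # worst paper first

iso = IsotonicRegression(increasing=True)
calibrated_sorted = iso.fit_transform(np.arange(len(order)),
                                      raw_scores[order])

# Map the calibrated values back to the original paper order.
calibrated = np.empty_like(raw_scores)
calibrated[order] = calibrated_sorted
print(calibrated)  # raw scores adjusted to respect the author's ranking
```

When the raw scores already agree with the author's ranking, the projection leaves them unchanged; when they disagree, pool-adjacent-violators averaging pulls the conflicting scores toward a common value, which is the calibration effect the paper evaluates against the ground truth expected review scores.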