Interrater reliability estimators tested against true interrater reliabilities
| Published in: | BMC Medical Research Methodology, Vol. 22, No. 1, pp. 1-19 |
|---|---|
| Main Authors: | Zhao, Xinshu; Feng, Guangchao Charles; Ao, Song Harris; Liu, Piper Liping (all: Department of Communication, Faculty of Social Sciences, University of Macau) |
| Format: | Journal Article |
| Language: | English |
| Published: | London: BioMed Central (Springer Nature B.V. / BMC), 29.08.2022 |
| Subjects: | Interrater reliability; Intercoder reliability; Kappa coefficient; Cohen's kappa; Krippendorff's alpha; Monte Carlo simulation; Reconstructed experiment; Agreements; Behavior; Benchmarks; Experiments; Variables; Health Sciences; Medicine & Public Health; Statistical Theory and Methods; Statistics for Life Sciences; Theory of Medicine/Bioethics |
| ISSN: | 1471-2288 |
Abstract

Background

Interrater reliability, also known as intercoder reliability, is defined as true agreement between raters (coders) without chance agreement. It is used across many disciplines, including medical and health research, to measure the quality of ratings, coding, diagnoses, and other observations and judgements. While numerous indices of interrater reliability are available, experts disagree on which are legitimate or more appropriate. Almost all agree that percent agreement (aₒ), the oldest and simplest index, is also the most flawed, because it fails to estimate and remove chance agreement, which is produced by raters' random rating. The experts disagree, however, on which chance estimators are legitimate or better. They also disagree on which of three factors (rating category, distribution skew, or task difficulty) an index should rely on to estimate chance agreement, and on which factors the known indices in fact rely.
The most popular chance-adjusted indices, according to a functionalist view of mathematical statistics, assume that all raters conduct intentional and maximum random rating, while typical raters conduct involuntary and reluctant random rating. The mismatches between the assumed and actual rater behaviors cause the indices to rely on mistaken factors to estimate chance agreement, producing the numerous paradoxes, abnormalities, and other misbehaviors of the indices identified by prior studies.
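The abstract gives no formulas, but the six chance-adjusted indices it compares share one algebraic skeleton and differ only in how they estimate chance agreement. The standard two-rater formulation, stated here for orientation, is

$$R = \frac{a_o - a_c}{1 - a_c},$$

where $a_o$ is the observed proportion of agreement and $a_c$ is the index's estimate of chance agreement. Bennett et al.'s S derives $a_c$ from the number of rating categories, while Scott's π, Cohen's κ, and Krippendorff's α derive it from the observed distribution, which is why the abstract frames the dispute in terms of rating category, distribution skew, and task difficulty.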
Methods
We conducted a 4 × 8 × 3 between-subjects controlled experiment with 4 subjects per cell. Each subject was a rating session with 100 pairs of ratings by two raters, totaling 384 rating sessions as the experimental subjects. The experiment tested the seven best-known indices of interrater reliability against the observed reliabilities and chance agreements. The impacts of the three factors (rating category, distribution skew, and task difficulty) on the indices were also tested.
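To make the design concrete, the sketch below simulates one rating session in the spirit of the described experiment: two raters judge the same 100 items, and task difficulty induces involuntary random rating. The function name, parameters, and mechanics are illustrative assumptions, not the authors' actual protocol.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_session(n_categories=3, skew=0.7, difficulty=0.3, n_items=100):
    """One hypothetical rating session: two raters judge the same items.

    A rater reports the true category on an easy item; on a difficult item
    the rater guesses uniformly at random (involuntary random rating).
    Illustrative sketch only, not the paper's protocol.
    """
    # True category per item: `skew` is the share of the first category,
    # with the remainder split evenly (assumes n_categories >= 2).
    probs = np.full(n_categories, (1.0 - skew) / (n_categories - 1))
    probs[0] = skew
    truth = rng.choice(n_categories, size=n_items, p=probs)

    def rate():
        hard = rng.random(n_items) < difficulty         # items this rater cannot judge
        guess = rng.integers(0, n_categories, n_items)  # uniform random guess
        return np.where(hard, guess, truth)

    return rate(), rate()

rater1, rater2 = simulate_session()
print("observed agreement a_o:", np.mean(rater1 == rater2))
```

Under this toy mechanism, crossing the three factors (4 category levels × 8 skew levels × 3 difficulty levels, 4 sessions per cell) would yield the 384 sessions the Methods describe.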
Results
The most criticized index, percent agreement (aₒ), proved to be the most accurate predictor of reliability, reporting directional r² = .84. It was also the third-best approximator, overestimating observed reliability by 13 percentage points on average. The three most acclaimed and most popular indices, Scott's π, Cohen's κ, and Krippendorff's α, underperformed all other indices, reporting directional r² = .312 and underestimating reliability by 31.4 to 31.8 points. The newest index, Gwet's AC₁, emerged as the second-best predictor and the most accurate approximator. Bennett et al.'s S ranked behind AC₁, and Perreault and Leigh's Iᵣ ranked fourth for both prediction and approximation. Reliance on category and skew, and failure to rely on difficulty, explain why the six chance-adjusted indices often underperformed aₒ, which they were created to outperform. The evidence corroborated the notion that the chance-adjusted indices assume intentional and maximum random rating, while the raters instead exhibited involuntary and reluctant random rating.
Conclusion
The authors call for more empirical studies, and especially more controlled experiments, to falsify or qualify this study. If the main findings are replicated and the underlying theories supported, new thinking and new indices may be needed. Index designers may need to refrain from assuming intentional and maximum random rating and instead assume involuntary and reluctant random rating. Accordingly, new indices may need to rely on task difficulty, rather than distribution skew or rating category, to estimate chance agreement.
| Article Number | 232 |
|---|---|
| Copyright | The Author(s) 2022 |
| DOI | 10.1186/s12874-022-01707-5 |
| Funding | Universidade de Macau (CRG2021-00002-ICI; ICI-RTO-0010-2021; CPG2021-00028-FSS; SRG2018-00143-FSS); Jiangxi Normal University (2018-08-10); Macau Higher Education Fund (HSS-UMAC-2020-02) |
| Keywords | Interrater reliability; Intercoder reliability; Cohen's kappa; Krippendorff's alpha; Reconstructed experiment |
| License | Open access under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/); data are covered by the Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) unless otherwise stated |
| PMID | 36038846 |