When coders are reliable: The application of three measures to assess inter-rater reliability/agreement with doctor–patient communication data coded with the VR-CoDES

Bibliographic Details
Published in: Patient education and counseling, Vol. 82, no. 3, pp. 341-345
Main Authors: Fletcher, Ian, Mazzi, Mariangela, Nuebling, Matthias
Format: Journal Article
Language:English
Published: Oxford: Elsevier Ireland Ltd, 01.03.2011
ISSN: 0738-3991; EISSN: 1873-5134
Abstract Objective: To investigate whether different measures of inter-rater reliability compute similar estimates with nominal data commonly encountered in communication studies, and to make recommendations on how reliability should be computed and described for communication coding instruments. Methods: The raw data from an inter-rater study with three coders were analysed with Cohen's κ, sensitivity and specificity measures, Fleiss's multirater κj, and an intraclass correlation coefficient (ICC). Results: Minor differences were found between Cohen's κ and an ICC model across paired data (largest margin = 0.01). There were negligible differences between the multirater estimates, e.g. κj (0.52) and ICC (0.53). Sensitivity analyses were in general agreement with the multirater estimates. Conclusion: It is more practical to analyse nominal data from more than two raters with an appropriate ICC model for inter-rater studies, and little difference exists between Cohen's κ and an ICC. Practice implications: Alternatives to Cohen's κ are readily available, but researchers need to be aware of the different ICC definitions. An ICC model should be fully described in reports. Investigators are encouraged to supply confidence limits with inter-rater data, and to revisit guidance regarding the relative strengths of agreement of reliability coefficients.
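The agreement statistics compared in the abstract can be sketched directly from their standard definitions. Below is a minimal Python sketch, not the authors' code: Cohen's (1960) κ, Fleiss's (1971) multirater κ, and the one-way random-effects ICC(1,1) of Shrout and Fleiss. The coding data in the example are hypothetical.

```python
# Hedged sketch of the agreement coefficients compared in the paper,
# implemented from their textbook definitions. Example data are made up.
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over nominal codes."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] / n * c2[k] / n for k in c1)            # chance agreement
    return (po - pe) / (1 - pe)

def fleiss_kappa(ratings):
    """Fleiss's multirater kappa; `ratings` holds one tuple of codes per subject."""
    n_subjects, n_raters = len(ratings), len(ratings[0])
    cats = {c for row in ratings for c in row}
    # per-subject agreement: fraction of rater pairs assigning the same category
    p_bar = sum(
        sum(m * (m - 1) for m in Counter(row).values()) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_subjects
    total = n_subjects * n_raters
    p_e = sum((sum(row.count(c) for row in ratings) / total) ** 2 for c in cats)
    return (p_bar - p_e) / (1 - p_e)

def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects intraclass correlation (Shrout-Fleiss case 1)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between subjects
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical binary codes from two raters over ten utterances:
r1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
r2 = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]
# Cohen's kappa = 0.6 here; ICC(1,1) on the same pairs is about 0.63.
```

On these made-up codes, Cohen's κ (0.60) and a one-way ICC on dummy-coded pairs (about 0.63) land close together, but how close depends on the ICC model chosen, which is one reason the paper recommends fully describing the ICC model in reports.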
Author Fletcher, Ian
Mazzi, Mariangela
Nuebling, Matthias
Author_xml – sequence: 1
  givenname: Ian
  surname: Fletcher
  fullname: Fletcher, Ian
  email: ian.fletcher@liverpool.ac.uk
  organization: Division of Clinical Psychology, University of Liverpool, UK
– sequence: 2
  givenname: Mariangela
  surname: Mazzi
  fullname: Mazzi, Mariangela
  organization: Department of Public Health and Community Medicine, Section of Clinical Psychology, University of Verona, Italy
– sequence: 3
  givenname: Matthias
  surname: Nuebling
  fullname: Nuebling, Matthias
  organization: GEB: Gesellschaft für Empirische Beratung mbH (Empirical Consulting), Denzlingen, Germany
BackLink http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=25927078 (view record in Pascal Francis)
https://www.ncbi.nlm.nih.gov/pubmed/21316896 (view this record in MEDLINE/PubMed)
CitedBy_id 10.1016/j.pec.2024.108241
10.1016/j.pec.2020.01.018
10.1016/j.pec.2012.05.001
10.1016/j.pec.2021.03.042
10.1007/s10461-019-02466-z
10.1111/jpm.12279
10.1111/joss.12061
10.1111/hex.13502
10.1007/s10111-020-00647-8
10.1016/j.pec.2020.04.016
10.1016/j.pec.2017.06.026
10.1016/j.pec.2019.03.025
10.1007/s00520-018-4484-7
10.1002/etc.3010
10.1016/j.jpsychores.2018.01.004
10.1371/journal.pone.0263433
10.1016/j.pec.2012.05.006
10.1016/j.pec.2020.03.019
10.1016/j.pec.2011.01.018
10.1155/2022/4579178
ContentType Journal Article
Copyright 2011 Elsevier Ireland Ltd
Elsevier Ireland Ltd
Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Copyright_xml – notice: 2011 Elsevier Ireland Ltd
– notice: Elsevier Ireland Ltd
– notice: Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
DOI 10.1016/j.pec.2011.01.004
DatabaseName CrossRef
Pascal-Francis
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
MEDLINE - Academic
Applied Social Sciences Index & Abstracts (ASSIA)
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
MEDLINE - Academic
Applied Social Sciences Index and Abstracts (ASSIA)
DatabaseTitleList MEDLINE
MEDLINE - Academic
Applied Social Sciences Index and Abstracts (ASSIA)
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Medicine
Public Health
EISSN 1873-5134
EndPage 345
ExternalDocumentID 21316896
25927078
10_1016_j_pec_2011_01_004
S0738399111000164
1_s2_0_S0738399111000164
Genre Evaluation Studies
Journal Article
ISICitedReferencesCount 23
ISSN 0738-3991
1873-5134
IsPeerReviewed true
IsScholarly true
Issue 3
Keywords Sensitivity and specificity
Intraclass correlation coefficient
Inter-rater study
VR-CoDES
Kappa
Statistical data
Evaluation
Coefficient
Physician patient relation
Computerized processing
Reliability
Statistics
Data gathering
Communication
Language English
License https://www.elsevier.com/tdm/userlicense/1.0
CC BY 4.0
Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
LinkModel OpenURL
PMID 21316896
PQID 854379311
PQPubID 23479
PageCount 5
PublicationCentury 2000
PublicationDate 2011-03-01
PublicationDateYYYYMMDD 2011-03-01
PublicationDate_xml – month: 03
  year: 2011
  text: 2011-03-01
  day: 01
PublicationDecade 2010
PublicationPlace Oxford
PublicationPlace_xml – name: Oxford
– name: Ireland
PublicationTitle Patient education and counseling
PublicationTitleAlternate Patient Educ Couns
PublicationYear 2011
Publisher Elsevier Ireland Ltd
Elsevier
Publisher_xml – name: Elsevier Ireland Ltd
– name: Elsevier
References Mitchell (bib0005) 1979. Interobserver agreement, reliability, and generalisability of data collected in observational studies. Psychol Bull 86:376-390. doi:10.1037/0033-2909.86.2.376
Dunn (bib0010) 2004. Statistical evaluation of measurement errors.
Traub (bib0015) 1994. Reliability for the social sciences: theory and applications.
Muller, Buttner (bib0020) 1994. A critical discussion of intraclass correlation coefficients. Stat Med 13:2465-2476. doi:10.1002/sim.4780132310
Bakeman, Gottman (bib0025) 1997. Observing interaction: an introduction to sequential analysis.
Streiner, Norman (bib0030) 2008. Health measurement scales.
Cohen (bib0035) 1960. A coefficient of agreement for nominal scales. Educ Psychol Meas 20:37-46. doi:10.1177/001316446002000104
Streiner, Norman (bib0040) 2008. Biostatistics: the bare essentials.
Cohen (bib0045) 1968. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 70:213-220. doi:10.1037/h0026256
Fleiss (bib0050) 1971. Measuring nominal scale agreement among many raters. Psychol Bull 76:378-382. doi:10.1037/h0031619
Brennan, Prediger (bib0055) 1981. Coefficient kappa: some uses, misuses, and alternatives. Educ Psychol Meas 41:687-699. doi:10.1177/001316448104100307
Bartko (bib0060) 1966. The intraclass correlation coefficient as a measure of reliability. Psychol Rep 19:3-11. doi:10.2466/pr0.1966.19.1.3
McGraw, Wong (bib0065) 1996. Forming inferences about some intraclass correlation coefficients. Psychol Methods 1:30-46. doi:10.1037/1082-989X.1.1.30
Shrout, Fleiss (bib0070) 1979. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 86:420-428. doi:10.1037/0033-2909.86.2.420
Altman (bib0075) 1991. Practical statistics for medical research.
Campbell, Machin (bib0080) 1999. Medical statistics: a commonsense approach.
Rothman, Greenland, Lash (bib0085) 2008. Modern epidemiology.
Zimmermann, Del Piccolo, Bensing, Bergvik, De Haes, Eide (bib0090) 2010. Coding patient emotional cues and concerns in medical consultations: the Verona coding definitions of emotional sequences (VR-CoDES). Patient Educ Couns.
Zimmermann C, Del Piccolo L, Finset A, Verona Network (bib0095) 2009. Verona Coding Definitions of Emotional Sequences (VR-CoDES). Cue and Concern Manual. European Association for Communication in Health Care.
Eide, Eide, Rustoen, Finset (bib0100) 2010. Patient validation of cues and concerns identified according to Verona coding definitions of emotional sequences (VR-CoDES): a video- and interview-based approach. Patient Educ Couns.
Berk (bib0105) 1979. Generalizability of behavioural observation: a clarification of interobserver agreement and interobserver reliability. Am J Ment Def 83:460-472.
Landis, Koch (bib0110) 1977. The measurement of observer agreement for categorical data. Biometrics 33:159-174. doi:10.2307/2529310
Dunn (bib0115) 1989. Design and analysis of reliability studies: the statistical evaluation of measurement errors.
Shrout (bib0120) 1998. Measurement reliability and agreement in psychiatry. Stat Methods Med Res 7:301-317. doi:10.1191/096228098672090967
SourceID proquest
pubmed
pascalfrancis
crossref
elsevier
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 341
SubjectTerms Biological and medical sciences
Clinical Coding - methods
Clinical Coding - standards
Clinical Competence - standards
Coding
Communication
Doctor-Patient communication
Guidance
Humans
Inter-rater study
Internal Medicine
Intraclass correlation coefficient
Kappa
Medical sciences
Miscellaneous
Observer Variation
Physician-Patient Relations
Public health. Hygiene
Public health. Hygiene-occupational medicine
Reliability
Reproducibility of Results
Research Design
Sensitivity
Sensitivity analysis
Sensitivity and specificity
Videotape Recording
VR-CoDES
Title When coders are reliable: The application of three measures to assess inter-rater reliability/agreement with doctor–patient communication data coded with the VR-CoDES
URI https://www.clinicalkey.com/#!/content/1-s2.0-S0738399111000164
https://www.clinicalkey.es/playcontent/1-s2.0-S0738399111000164
https://dx.doi.org/10.1016/j.pec.2011.01.004
https://www.ncbi.nlm.nih.gov/pubmed/21316896
https://www.proquest.com/docview/854379311
https://www.proquest.com/docview/870996376
Volume 82