Measuring and Reporting Intercoder Reliability in Plan Quality Evaluation Research

Bibliographic Details
Published in: Journal of Planning Education and Research, Vol. 34, no. 1, pp. 77-93
Main Authors: Stevens, Mark R.; Lyles, Ward; Berke, Philip R.
Format: Journal Article
Language: English
Published: Los Angeles, CA: SAGE Publications, 01.03.2014
ISSN: 0739-456X, 1552-6577
Description
Summary: Plan quality evaluation researchers typically evaluate plans in relation to whether they contain certain desirable features. Best practice dictates that plans be evaluated by at least two readers and that researchers report a measure of the extent to which the readers agree on whether the plans contain the desirable features. Established practice for assessing this agreement has been subject to criticism. We summarize this criticism, discuss an alternative approach to assessing agreement, and provide recommendations for plan quality evaluation researchers to follow to improve the quality of their data and the manner in which they assess and report that quality.
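
To illustrate the kind of intercoder agreement measure the summary refers to (the record does not specify which statistics the authors critique or recommend), the sketch below computes simple percent agreement and Cohen's kappa for two readers' binary presence/absence codes. The reader codes and function names are hypothetical and serve only as an example.

# A minimal sketch (not the authors' method) of two common intercoder
# agreement measures for binary presence/absence coding by two readers:
# simple percent agreement and Cohen's kappa, which corrects for chance.

from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of items on which the two readers assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    categories = set(codes_a) | set(codes_b)
    # Expected agreement if both readers coded at random with their
    # observed marginal frequencies.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes: 1 = plan contains the feature, 0 = it does not.
reader_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
reader_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

print(f"Percent agreement: {percent_agreement(reader_1, reader_2):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(reader_1, reader_2):.2f}")

For these hypothetical codes the script prints a percent agreement of 0.80 and a kappa of about 0.52, showing how a chance-corrected statistic can be substantially lower than raw agreement.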
DOI: 10.1177/0739456X13513614