
When coders are reliable: the application of three measures to assess inter-rater reliability/agreement with doctor-patient communication data coded with the VR-CoDES.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

When coders are reliable: the application of three measures to assess inter-rater reliability/agreement with doctor-patient communication data coded with the VR-CoDES. / Fletcher, Ian; Mazzi, M; Nuebling, M.
In: Patient Education and Counseling, Vol. 82, No. 3, 03.2011, p. 341-345.

Bibtex

@article{27c1040fdcf4433ea78f22652a04e766,
title = "When coders are reliable: the application of three measures to assess inter-rater reliability/agreement with doctor-patient communication data coded with the VR-CoDES.",
abstract = "ObjectiveTo investigate whether different measures of inter-rater reliability will compute similar estimates with nominal data commonly encountered in communication studies. To make recommendations how reliability should be computed and described for communication coding instruments.MethodsThe raw data from an inter-rater study with three coders were analysed with; Cohen's κ, sensitivity and specificity measures, Fleiss's multirater κj, and an intraclass correlation coefficient (ICC).ResultsMinor differences were found between Cohen's κ and an ICC model across paired data (largest margin = 0.01). There were negligible differences between the multirater estimates e.g. κj (0.52) and ICC (0.53). Sensitivity analyses were in general agreement with the multirater estimates.ConclusionIt is more practical to analyse nominal data with >2 raters with an appropriate model ICC for inter-rater studies, and little difference exists between Cohen's κ or an ICC.Practice implicationAlternatives to Cohen's κ are readily available, but researchers need to be aware of the different ICC definitions. An ICC model should be fully described in reports. Investigators are encouraged to supply confidence limits with inter-rater data, and to revisit guidance regarding the relative strengths of agreement of reliability coefficients.",
keywords = "Inter-rater study, Kappa , Intraclass correlation coefficient , Sensitivity and specificity , VR-CoDES",
author = "Ian Fletcher and M Mazzi and M Nuebling",
year = "2011",
month = mar,
doi = "10.1016/j.pec.2011.01.004",
language = "English",
volume = "82",
pages = "341--345",
journal = "Patient Education and Counseling",
issn = "0738-3991",
publisher = "Elsevier Ireland Ltd",
number = "3",

}

RIS

TY - JOUR

T1 - When coders are reliable: the application of three measures to assess inter-rater reliability/agreement with doctor-patient communication data coded with the VR-CoDES.

AU - Fletcher, Ian

AU - Mazzi, M

AU - Nuebling, M

PY - 2011/3

Y1 - 2011/3

N2 - Objective: To investigate whether different measures of inter-rater reliability compute similar estimates with nominal data commonly encountered in communication studies, and to make recommendations on how reliability should be computed and described for communication coding instruments. Methods: The raw data from an inter-rater study with three coders were analysed with Cohen's κ, sensitivity and specificity measures, Fleiss's multirater κj, and an intraclass correlation coefficient (ICC). Results: Minor differences were found between Cohen's κ and an ICC model across paired data (largest margin = 0.01). There were negligible differences between the multirater estimates, e.g. κj (0.52) and ICC (0.53). Sensitivity analyses were in general agreement with the multirater estimates. Conclusion: It is more practical to analyse nominal data from >2 raters with an appropriate ICC model for inter-rater studies, and little difference exists between Cohen's κ and an ICC. Practice implications: Alternatives to Cohen's κ are readily available, but researchers need to be aware of the different ICC definitions. An ICC model should be fully described in reports. Investigators are encouraged to supply confidence limits with inter-rater data, and to revisit guidance regarding the relative strengths of agreement of reliability coefficients.

AB - Objective: To investigate whether different measures of inter-rater reliability compute similar estimates with nominal data commonly encountered in communication studies, and to make recommendations on how reliability should be computed and described for communication coding instruments. Methods: The raw data from an inter-rater study with three coders were analysed with Cohen's κ, sensitivity and specificity measures, Fleiss's multirater κj, and an intraclass correlation coefficient (ICC). Results: Minor differences were found between Cohen's κ and an ICC model across paired data (largest margin = 0.01). There were negligible differences between the multirater estimates, e.g. κj (0.52) and ICC (0.53). Sensitivity analyses were in general agreement with the multirater estimates. Conclusion: It is more practical to analyse nominal data from >2 raters with an appropriate ICC model for inter-rater studies, and little difference exists between Cohen's κ and an ICC. Practice implications: Alternatives to Cohen's κ are readily available, but researchers need to be aware of the different ICC definitions. An ICC model should be fully described in reports. Investigators are encouraged to supply confidence limits with inter-rater data, and to revisit guidance regarding the relative strengths of agreement of reliability coefficients.

KW - Inter-rater study

KW - Kappa

KW - Intraclass correlation coefficient

KW - Sensitivity and specificity

KW - VR-CoDES

U2 - 10.1016/j.pec.2011.01.004

DO - 10.1016/j.pec.2011.01.004

M3 - Journal article

VL - 82

SP - 341

EP - 345

JO - Patient Education and Counseling

JF - Patient Education and Counseling

SN - 0738-3991

IS - 3

ER -
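
Illustrative analysis sketch

The abstract compares pairwise Cohen's κ, Fleiss's multirater κ, and an intraclass correlation coefficient (ICC) on nominal codes from three raters, and recommends reporting the ICC model used together with confidence limits. The Python sketch below is not the authors' code: the library choices (scikit-learn, statsmodels, pandas, pingouin) and the simulated VR-CoDES-style ratings are assumptions for illustration only.

import numpy as np
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
n_units, n_raters, n_codes = 100, 3, 4

# Simulated nominal codes standing in for VR-CoDES categories; each rater
# agrees with a latent "true" code about 80% of the time.
true_codes = rng.integers(0, n_codes, n_units)
ratings = np.column_stack([
    np.where(rng.random(n_units) < 0.8, true_codes,
             rng.integers(0, n_codes, n_units))
    for _ in range(n_raters)
])

# Pairwise Cohen's kappa for every pair of coders.
for i in range(n_raters):
    for j in range(i + 1, n_raters):
        k = cohen_kappa_score(ratings[:, i], ratings[:, j])
        print(f"Cohen's kappa, raters {i} vs {j}: {k:.2f}")

# Fleiss's multirater kappa on the subjects-by-categories count table.
table, _ = aggregate_raters(ratings)
print(f"Fleiss's kappa: {fleiss_kappa(table, method='fleiss'):.2f}")

# ICC in long format; the full output table (ICC1..ICC3k) makes explicit
# which ICC model is being reported, with 95% confidence intervals.
long_df = pd.DataFrame({
    "targets": np.repeat(np.arange(n_units), n_raters),
    "raters": np.tile(np.arange(n_raters), n_units),
    "ratings": ratings.ravel().astype(float),
})
print(pg.intraclass_corr(data=long_df, targets="targets",
                         raters="raters", ratings="ratings"))

Running the sketch prints three pairwise κ values, one Fleiss's κ, and a six-row ICC table with 95% confidence intervals, mirroring the quantities the paper compares and its recommendation to report the ICC model and confidence limits in full.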