Tutorial | August 02, 2017
Evaluating Random Error in Clinician-Administered Surveys: Theoretical Considerations and Clinical Applications of Interobserver Reliability and Agreement
 
Author Affiliations & Notes
  • Rebecca J. Bennett
    Ear Science Institute Australia, Subiaco, Western Australia
    Ear Sciences Centre, The University of Western Australia, Nedlands
  • Dunay S. Taljaard
    Ear Sciences Centre, The University of Western Australia, Nedlands
    Princess Margaret Hospital, Subiaco, Western Australia
  • Michelle Olaithe
    School of Psychological Science, The University of Western Australia
  • Chris Brennan-Jones
    Ear Science Institute Australia, Subiaco, Western Australia
    Ear Sciences Centre, The University of Western Australia, Nedlands
  • Robert H. Eikelboom
    Ear Science Institute Australia, Subiaco, Western Australia
    Ear Sciences Centre, The University of Western Australia, Nedlands
    Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Rebecca Bennett: bec.bennett@earscience.org.au
  • Editor: Sumitrajit Dhar
  • Associate Editor: Ryan McCreery
Article Information
American Journal of Audiology, Newly Published. doi:10.1044/2017_AJA-16-0100
History: Received October 17, 2016; Revised February 7, 2017; Accepted March 7, 2017
 

Purpose The purpose of this study is to raise awareness of interobserver concordance and the differences between interobserver reliability and agreement when evaluating the responsiveness of a clinician-administered survey and, specifically, to demonstrate the clinical implications of data type (nominal/categorical, ordinal, interval, or ratio) and statistical index selection (for example, Cohen's kappa, Krippendorff's alpha, or the intraclass correlation coefficient).

Methods In this prospective cohort study, 3 clinical audiologists, who were masked to each other's scores, administered the Practical Hearing Aid Skills Test–Revised to 18 adult owners of hearing aids. Interobserver concordance was examined using a range of reliability and agreement statistical indices.

Results The importance of appropriate selection of statistical measures of concordance was demonstrated with a worked example, in which the level of interobserver concordance achieved varied from “no agreement” to “almost perfect agreement” depending on the data type and statistical index selected.
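The kind of divergence the worked example describes can be sketched with hypothetical ratings (these data and function names are illustrative, not the study's): when one rater sits systematically one scale step above the other, an exact-match agreement index such as Cohen's kappa can report essentially no agreement, while a reliability index that tolerates a constant offset, here a Pearson correlation on the same scores treated as interval data, reports near-perfect concordance.

```python
from math import sqrt

# Hypothetical example: two raters score 10 items on a 4-point ordinal scale;
# rater B scores one step above rater A wherever the scale allows.
a = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
b = [2, 3, 4, 4, 2, 3, 4, 4, 2, 3]

def percent_agreement(x, y):
    """Agreement index: proportion of items with identical scores."""
    return sum(u == v for u, v in zip(x, y)) / len(x)

def cohens_kappa(x, y):
    """Agreement index for nominal data: exact agreement corrected for chance."""
    n = len(x)
    p_o = percent_agreement(x, y)
    cats = set(x) | set(y)
    p_e = sum((x.count(c) / n) * (y.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

def pearson_r(x, y):
    """Reliability index for interval data: rewards consistent ranking and is
    blind to a constant offset between raters."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y))
    return cov / sqrt(sum((u - mx) ** 2 for u in x)
                      * sum((v - my) ** 2 for v in y))

print(round(percent_agreement(a, b), 2))  # 0.2
print(round(cohens_kappa(a, b), 2))       # -0.04 ("no agreement")
print(round(pearson_r(a, b), 2))          # 0.95 ("almost perfect")
```

The same pair of score vectors thus spans the full interpretive range, which is why reporting a single index without stating the data type and the reliability-versus-agreement distinction can mislead clinical interpretation.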

Conclusions This study demonstrates that the methodology used to evaluate survey score concordance can influence the statistical results obtained and thus affect clinical interpretations.

Acknowledgments
Rebecca Jane Bennett is funded by an Australian Postgraduate Award scholarship through the School of Surgery at The University of Western Australia. The authors would like to acknowledge the assistance of the Lions Hearing Clinic with participant recruitment and the participants for devoting their time to this study. The authors would also like to acknowledge the assistance of Liz Rocher, Lize Strachan, Jordan Bishop, and Sandra Nair with data collection and data entry.