Inter-rater and test-retest reliability of computerized clinical vestibular tools

J Vestib Res. 2021;31(5):365-373. doi: 10.3233/VES-201522.

Abstract

Background: Clinical vestibular technology is rapidly evolving to improve objective assessments of vestibular function. Understanding the reliability and expected score ranges of emerging clinical vestibular tools is important to gauge how these tools should be used as clinical endpoints.

Objective: The objective of this study was to evaluate the inter-rater and test-retest reliability of four vestibular tools using intraclass correlation coefficients (ICCs) and to determine expected ranges of scores using smallest real difference (SRD) measures.
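
The abstract does not spell out the SRD formula; the conventional derivation (an assumption here, not stated in the study) ties the SRD to the ICC through the standard error of measurement (SEM):

```latex
\mathrm{SEM} = \mathrm{SD}\sqrt{1 - \mathrm{ICC}}, \qquad
\mathrm{SRD} = 1.96\sqrt{2}\,\mathrm{SEM} \approx 2.77\,\mathrm{SEM}
```

The factor of sqrt(2) reflects that a difference between two measurements accumulates measurement error from both sessions, and 1.96 corresponds to a 95% confidence level.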

Methods: Sixty healthy graduate students completed two 1-hour sessions, at most a week apart, consisting of two video head-impulse tests (vHIT), computerized dynamic visual acuity (cDVA) testing, and a smartphone-assisted subjective visual vertical bucket test (SA-SVV). Thirty students were tested by a different tester at each session (inter-rater) and 30 by the same tester (test-retest). ICCs and SRDs were calculated for both conditions.
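
The abstract does not say which ICC model was used; as a minimal sketch, assuming a two-way random-effects, absolute-agreement, single-measure model (ICC(2,1), Shrout & Fleiss), the ICC and SRD for an n-subjects x k-sessions score matrix could be computed as follows. The function names and the simulated gain values are illustrative, not from the study.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
    intraclass correlation (Shrout & Fleiss).

    scores: (n_subjects, k) matrix; columns are sessions or raters.
    """
    n, k = scores.shape
    grand = scores.mean()
    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)  # subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)  # sessions/raters
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def srd(scores: np.ndarray, icc: float) -> float:
    """Smallest real difference: 1.96 * sqrt(2) * SEM,
    with SEM = pooled SD * sqrt(1 - ICC)."""
    sem = scores.std(ddof=1) * np.sqrt(1.0 - icc)
    return 1.96 * np.sqrt(2.0) * sem

# Illustrative data: 30 subjects x 2 sessions, mirroring one subgroup's design.
rng = np.random.default_rng(0)
true_score = rng.normal(1.0, 0.15, size=(30, 1))           # e.g., vHIT gain
scores = true_score + rng.normal(0.0, 0.08, size=(30, 2))  # session noise
icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.2f}, SRD = {srd(scores, icc):.3f}")
```

In this formulation the SRD is in the same units as the measured score, so it can be read directly as the smallest between-session change exceeding measurement error.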

Results: Most measures fell within the moderate ICC range (0.50-0.75). ICCs were higher for cDVA in the inter-rater subgroup and higher for vHITs in the test-retest subgroup.

Conclusions: Measures from the four tools evaluated were moderately reliable. There may be a tester effect on reliability, particularly for vHITs. Further research should repeat these analyses in a patient population and explore methodological differences between vHIT systems.

Keywords: Clinical tools; cDVA; reliability; subjective visual vertical; vHIT.

MeSH terms

  • Head Impulse Test*
  • Humans
  • Reproducibility of Results
  • Vestibule, Labyrinth*
  • Vision Tests
  • Visual Acuity