Testing equivalence with repeated measures: tests of the difference model of two-alternative forced-choice performance

Span J Psychol. 2011 Nov;14(2):1023-49. doi: 10.5209/rev_sjop.2011.v14.n2.48.

Abstract

Solving theoretical or empirical issues sometimes requires establishing the equality of two variables measured repeatedly. This defies the logic of null hypothesis significance testing, which assesses evidence against the null hypothesis of equality, not for it. In some contexts, equivalence is assessed through regression analysis by testing for a zero intercept and a unit slope (or simply for a unit slope when regression is forced through the origin). This paper shows that this approach yields highly inflated Type I error rates under the sampling models most commonly implied in studies of equivalence. We propose an alternative approach based on omnibus tests of equality of means and variances and on subject-by-subject analyses (where applicable), and we show that these tests have adequate Type I error rates and power. The approach is illustrated with a re-analysis of published data from a signal detection theory experiment in which several hypotheses of equivalence had been tested using only regression analysis. Some further errors and inadequacies of the original analyses are described, and further scrutiny of the data contradicts the conclusions reached through inadequate application of regression analysis.
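The regression-based equivalence check the abstract critiques can be illustrated with a short simulation. The sketch below is hypothetical (the constants, sample sizes, and noise levels are illustrative and not taken from the paper): when both repeated measures carry independent error, the OLS slope is attenuated below 1 even though the two variables are truly equivalent, so the test of H0: slope = 1 rejects far more often than the nominal 5% rate.

```python
import math
import random

def slope_t_statistic(x, y):
    """Fit y = a + b*x by ordinary least squares and return the
    t statistic for H0: b = 1 (standard OLS formulas)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se_b = math.sqrt(s2 / sxx)                 # standard error of the slope
    return (b - 1.0) / se_b

def rejection_rate(n_subjects=30, n_sims=2000, crit=2.048, seed=1):
    """Proportion of simulated data sets that reject H0: slope = 1
    at alpha = .05 (crit = t critical value for df = 28), when the
    two measures are truly equivalent: the same underlying value
    per subject, each observed with independent noise."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        true_vals = [rng.gauss(0.0, 1.0) for _ in range(n_subjects)]
        # both measures contain error -> OLS slope attenuated toward 0
        x = [t + rng.gauss(0.0, 0.5) for t in true_vals]
        y = [t + rng.gauss(0.0, 0.5) for t in true_vals]
        if abs(slope_t_statistic(x, y)) > crit:
            rejections += 1
    return rejections / n_sims
```

With these illustrative noise levels the empirical rejection rate lands well above .05, which is the kind of Type I error inflation the abstract reports; an unbiased check would instead compare means and variances directly, as in the omnibus tests the authors propose.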

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Analysis of Variance
  • Choice Behavior*
  • Computer Simulation
  • Humans
  • Likelihood Functions
  • Linear Models
  • Mathematical Computing*
  • Models, Statistical
  • Probability
  • Psychological Tests / statistics & numerical data*
  • Psychometrics / statistics & numerical data*
  • Reproducibility of Results
  • Signal Detection, Psychological*
  • Software*