Objective: There is no consensus on the best method to determine the minimal important change (MIC) of patient-reported outcomes. Recent publications recommend the use of multiple methods. Our aim was to assess whether different methods lead to consistent values for the MIC.
Study design and setting: We applied two commonly used anchor-based methods and three commonly used distribution-based methods to determine the MIC of the pain and physical functioning subscales of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) questionnaire in five different studies involving patients with hip or knee complaints. We repeated the anchor-based methods using relative change scores, to adjust for baseline scores.
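The abstract does not specify which estimators were used; as an illustration only, the following is a minimal sketch of two widely cited distribution-based estimators (half a standard deviation; one standard error of measurement) and the anchor-based mean-change approach, in which the MIC is the mean change score of patients who rate themselves "slightly improved" on a global rating-of-change anchor. The function names, the reliability value, and the anchor labels are hypothetical.

```python
import math
import statistics

def mic_half_sd(baseline_scores):
    """Distribution-based estimate: MIC taken as 0.5 * SD of baseline scores."""
    return 0.5 * statistics.stdev(baseline_scores)

def mic_one_sem(baseline_scores, reliability):
    """Distribution-based estimate: MIC taken as one standard error of
    measurement, SEM = SD * sqrt(1 - reliability)."""
    return statistics.stdev(baseline_scores) * math.sqrt(1.0 - reliability)

def mic_mean_change(change_scores, anchor_ratings, target="slightly improved"):
    """Anchor-based estimate: mean change score among patients whose global
    rating on the anchor equals the target category (hypothetical labels)."""
    selected = [c for c, a in zip(change_scores, anchor_ratings) if a == target]
    return statistics.mean(selected)
```

As the study found, these estimators can yield quite different MIC values for the same data: the distribution-based values depend only on score variability and reliability, while the anchor-based value depends on which patients end up in the "slightly improved" category.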
Results: We found large variation in MIC values obtained by the same method across studies and by different methods within studies. We consider it unlikely that this variation can be explained by differences between disease groups, disease severity, or lengths of follow-up. The variation persisted when relative change scores were used. We could not determine whether this variation reflects true differences in MIC values between populations or conceptual and methodological problems of the MIC methods.
Conclusion: To better disentangle these two possible explanations, the MIC methodology should be improved and standardized. In the meantime, caution is needed when interpreting and using published MIC values.