Study design: A prospective and retrospective cross-sectional cohort analysis.
Objective: The aim of this study was to show that Patient-Reported Outcomes Measurement Information System (PROMIS) computer adaptive testing (CAT) assessments for physical function and pain interference can be collected efficiently during a standard office visit, and to compare these scores with scores from the previously validated Oswestry Disability Index (ODI) and Neck Disability Index (NDI), providing evidence of convergent validity for use in patients with spine pathology.
Summary of background data: Spinal surgery outcomes are highly variable, and substantial debate continues regarding the role and value of spine surgery. The routine collection of patient-based outcomes instruments in spine surgery patients may inform this debate. Traditionally, the inefficiency associated with collecting standard validated instruments has been a barrier to routine use in outpatient clinics. We utilized several CAT instruments available through PROMIS and correlated these with the results obtained using "gold standard" legacy outcomes measurement instruments.
Methods: All measurements were collected at a routine clinical visit. The ODI and the NDI assessments were used as "gold standard" comparisons for patient-reported outcomes.
Results: PROMIS CAT instruments required 4.5 ± 1.8 questions and took 35 ± 16 seconds to complete, compared with the ODI/NDI, which required 10 questions and took 188 ± 85 seconds when administered electronically. Linear regression analysis of retrospective scores from patients with a primary back complaint revealed moderate to strong correlations between the ODI and PROMIS physical function scores, with r values ranging from 0.5846 to 0.8907 depending on the specific assessment and the patient subset examined.
Conclusion: Routine collection of physical function outcome measures in clinical practice offers the ability to inform and improve patient care. We have shown that several PROMIS CAT instruments can be administered efficiently during routine clinical visits. The moderate to strong correlations observed support the convergent validity of computer adaptive testing relative to the gold standard "static" legacy assessments.
Level of evidence: 4.