Background: During systematic reviews, "data abstraction" refers to the process of collecting data from reports of studies. Data abstractors' level of experience may affect the accuracy of the abstracted data. Using data from a randomized crossover trial that compared different data abstraction approaches, we examined the association between abstractors' level of experience and the accuracy of data abstraction.
Methods: We classified abstractors as "more experienced" if they had authored three or more published systematic reviews, and "less experienced" otherwise. Each abstractor abstracted data related to study design, baseline characteristics, and outcomes/results from six articles. We considered two types of errors: incorrect abstraction and errors of omission. We estimated the proportion of errors by level of experience using a binomial generalized linear mixed model.
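The model described above can be sketched as follows. This is an illustrative example only, not the authors' analysis code: the data are simulated, the column names (`error`, `experience`, `abstractor`) are hypothetical, and it uses the variational-Bayes fitter for binomial mixed models from statsmodels, with a random intercept per abstractor.

```python
# Hedged sketch: a binomial generalized linear mixed model of error (0/1)
# on experience, with a random intercept per abstractor. Simulated data;
# column names are hypothetical, not from the trial.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n_abstractors, n_items = 50, 30
rows = []
for a in range(n_abstractors):
    experienced = int(a >= 25)            # half "more experienced"
    intercept = rng.normal(0.0, 0.3)      # abstractor-level random effect
    for _ in range(n_items):
        logit = -1.3 - 0.2 * experienced + intercept
        p = 1.0 / (1.0 + np.exp(-logit))  # error probability for this item
        rows.append({"error": rng.binomial(1, p),
                     "experience": experienced,
                     "abstractor": a})
df = pd.DataFrame(rows)

# vc_formulas adds a random intercept for each abstractor.
model = BinomialBayesMixedGLM.from_formula(
    "error ~ experience", {"abstractor": "0 + C(abstractor)"}, df)
result = model.fit_vb()
print(np.exp(result.fe_mean[1]))  # odds ratio for experience
```

Exponentiating the fixed-effect coefficient for experience gives an odds ratio comparable to those reported in the Results.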
Results: We used data from 25 less experienced and 25 more experienced data abstractors. Overall error proportions were similar for less experienced (21%) and more experienced (19%) abstractors. Compared with less experienced abstractors, more experienced abstractors had lower odds of errors for data items related to outcomes/results (adjusted odds ratio [OR] = 0.53; 95% CI, 0.34-0.82), possibly lower odds for items related to study design (adjusted OR = 0.83; 95% CI, 0.64-1.09), but possibly higher odds for items related to baseline characteristics (adjusted OR = 1.42; 95% CI, 0.97-2.06).
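For readers unfamiliar with odds ratios, the sketch below computes an unadjusted OR and a Woolf-type 95% confidence interval from a 2x2 table of hypothetical error counts (chosen only to echo the ~19% vs. ~21% overall error proportions; the ORs in the abstract are model-adjusted and cannot be reproduced this way).

```python
# Hedged illustration: unadjusted odds ratio and 95% CI from a 2x2 table.
# Counts are hypothetical, loosely mirroring 19% vs. 21% error proportions.
import math

a, b = 190, 810  # more experienced: errors, non-errors
c, d = 210, 790  # less experienced: errors, non-errors

or_ = (a / b) / (c / d)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
low = math.exp(math.log(or_) - 1.96 * se)
high = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f} (95% CI, {low:.2f}-{high:.2f})")
# → OR = 0.88 (95% CI, 0.71-1.10)
```

A confidence interval that crosses 1, as here, is why the abstract hedges with "possibly" for the study-design and baseline-characteristics items.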
Conclusion: Abstractors' level of experience mattered little to overall accuracy. Adjudication reduced errors, but error proportions remained high for data items related to outcomes/results.
Keywords: accuracy; adjudication; data abstraction; errors; experience; systematic review.
© 2020 John Wiley & Sons, Ltd.