Introduction: Missing values occur in nearly all clinical studies, despite the best efforts of the investigators, and frequently cause unrecognised biases. Our aims were (1) to assess the reporting and handling of missing values in the critical care literature; (2) to describe the impact of various techniques for handling missing values on study results; and (3) to provide guidance on analysing clinical studies when data are missing.
Methods: We reviewed 44 published manuscripts in three critical care research journals. We used the Conflicus study database to illustrate how to handle missing values.
Results: Among the 44 published manuscripts, 16 (36.4 %) gave no information on whether missing data occurred, 6 (13.6 %) declared having no missing data, 20 (45.5 %) reported that missing values occurred but did not handle them, and only 2 (4.5 %) used sophisticated missing-data handling methods. In our example using the Conflicus study database, we evaluated correlations between job strain intensity and the type and proportion of missing values. Overall, 8 % of the data were missing; however, restricting the analysis to complete cases would have discarded 24 % of the questionnaires. A greater number and a higher percentage of missing values for a given variable were significantly associated with a lower job strain score (indicating greater stress). Among respondents who fully completed the job strain questionnaire, those whose questionnaires otherwise did and did not contain missing values differed significantly in age, number of children and country of birth. We provide an algorithm for managing the analysis of clinical studies when data are missing.
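The gap between the cell-level missing rate (8 %) and the fraction of questionnaires lost under complete-case analysis (24 %) is a general phenomenon: a row is discarded if any one of its items is missing. A minimal sketch on synthetic data (not the Conflicus data; the missingness rate, respondent count and item names below are illustrative assumptions) shows how a small per-cell rate inflates into a much larger per-respondent loss:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic questionnaire: 100 respondents, 10 items,
# ~3 % of individual cells set missing completely at random.
data = rng.normal(size=(100, 10))
data[rng.random(size=data.shape) < 0.03] = np.nan
df = pd.DataFrame(data, columns=[f"item_{i}" for i in range(10)])

# Cell-level missing rate vs. respondents lost to complete-case analysis.
cell_missing = df.isna().to_numpy().mean()      # fraction of missing cells
complete = df.dropna()                          # keep only fully complete rows
rows_discarded = 1 - len(complete) / len(df)    # fraction of respondents dropped

print(f"missing cells: {cell_missing:.1%}, "
      f"respondents discarded: {rows_discarded:.1%}")
```

With 10 items each missing independently at rate p, a respondent survives complete-case analysis with probability (1 - p)^10, so the discarded fraction is always at least as large as the cell-level rate, mirroring the 8 % vs. 24 % figures above.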
Conclusion: Missing data are common and can bias the interpretation of results. They should be reported routinely and taken into account when modelling data from clinical studies.