Background: In response to COVID-19, the informatics community united to aggregate as much clinical data as possible to characterize this new disease and reduce its impact through collaborative analytics. The National COVID Cohort Collaborative (N3C) is now the largest publicly available HIPAA limited data set in US history, with over 6.4 million patients, and is a testament to a partnership of over 100 organizations.
Methods: We developed a pipeline for ingesting, harmonizing, and centralizing data from 56 contributing data partners using four federated Common Data Models. N3C data quality (DQ) review involves both automated and manual procedures. In the process, we discovered several DQ heuristics in our centralized context, both within the pipeline and during downstream, project-based analysis. Feedback to the sites led to many local and centralized DQ improvements.
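As a minimal sketch of what one automated DQ heuristic might look like, the check below flags a site whose demographic completeness falls below a threshold. The record layout, field name, and 95% threshold are hypothetical illustrations, not N3C's actual pipeline logic.

```python
# Hypothetical automated DQ heuristic: flag a site when too many patient
# records are missing a demographic field. Not N3C's actual implementation.

def demographic_completeness(records, field, threshold=0.95):
    """Return (rate, passes): the fraction of records with a non-missing
    value for `field`, and whether that fraction meets the threshold."""
    if not records:
        return 0.0, False
    present = sum(1 for r in records if r.get(field) not in (None, ""))
    rate = present / len(records)
    return rate, rate >= threshold

# Hypothetical per-site patient records (one dict per patient).
site_records = [
    {"person_id": 1, "birth_year": 1980, "gender": "F"},
    {"person_id": 2, "birth_year": None, "gender": "M"},
    {"person_id": 3, "birth_year": 1975, "gender": "F"},
    {"person_id": 4, "birth_year": 1990, "gender": ""},
]

rate, ok = demographic_completeness(site_records, "birth_year")
print(f"birth_year completeness: {rate:.0%}, passes: {ok}")  # 75%, False
```

In a centralized context, a check like this can be run uniformly across every site's submission, so that a site falling below the benchmark can receive targeted feedback.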
Results: Beyond well-recognized DQ findings, we discovered 15 heuristics relating to source CDM conformance, demographics, COVID tests, conditions, encounters, measurements, observations, coding completeness, and fitness for use. Of 56 sites, 37 (66%) exhibited issues identified by these heuristics, and all 37 demonstrated improvement after receiving feedback.
Discussion: We encountered site-to-site differences in DQ that would have been challenging to discover using federated checks alone. We have demonstrated that centralized DQ benchmarking reveals unique opportunities for data quality improvement that will support improved research analytics both locally and in aggregate.
Conclusion: By combining rapid, continual assessment of DQ with a large volume of multi-site data, it is possible to support more nuanced scientific questions with the scale and rigor that they require.
Keywords: COVID-19; Data accuracy; Electronic Health Records.
© The Author(s) 2021. Published by Oxford University Press on behalf of the American Medical Informatics Association.