Assessment of quality control approaches for metagenomic data analysis

Sci Rep. 2014 Nov 7;4:6957. doi: 10.1038/srep06957.


The number of next-generation sequencing (NGS) projects and associated datasets is growing explosively, and such data must pass through quality control (QC) procedures before they can be used for omics analyses. A QC procedure typically identifies and filters sequencing artifacts such as low-quality reads and contaminating reads, which can significantly affect, and sometimes mislead, downstream analyses. QC of NGS data from microbial communities is especially challenging. In this work, we evaluated and compared the performance and effects of various QC pipelines on different types of metagenomic NGS data and from several angles, and on this basis we propose general principles for using QC pipelines. Results on both simulated and real metagenomic datasets show that: first, QC-Chain identifies contamination in metagenomic NGS datasets of different complexities with high sensitivity and specificity; second, its high-performance computing engine enables QC-Chain to process data in significantly less time than pipelines based on serial computing; and third, QC-Chain outperforms the other tools in benefiting downstream metagenomic data analysis.
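To illustrate the kind of filtering a QC procedure performs, the sketch below removes reads whose mean Phred quality falls below a threshold. This is a generic, minimal example and not the method used by QC-Chain or any pipeline evaluated in the paper; the function names and the Q20 cutoff are illustrative assumptions.

```python
# Minimal sketch of low-quality read filtering (illustrative only,
# not the QC-Chain implementation). Assumes Sanger-style Phred+33
# quality encoding and a FASTQ record layout of 4 lines per read.

def mean_phred(qual_line, offset=33):
    """Mean Phred score of a Phred+33-encoded quality string."""
    return sum(ord(c) - offset for c in qual_line) / len(qual_line)

def filter_fastq(lines, min_mean_q=20):
    """Yield (header, sequence, quality) tuples for reads whose
    mean quality meets the cutoff; drop the rest."""
    for i in range(0, len(lines), 4):
        header, seq, _, qual = lines[i:i + 4]
        if mean_phred(qual) >= min_mean_q:
            yield header, seq, qual

reads = [
    "@read1", "ACGT", "+", "IIII",   # 'I' = Q40: high quality, kept
    "@read2", "ACGT", "+", "!!!!",   # '!' = Q0: low quality, dropped
]
kept = list(filter_fastq(reads, min_mean_q=20))
```

Real QC pipelines also trim low-quality read ends and screen for contaminating sequences, steps this sketch omits for brevity.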

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Computational Biology / methods
  • Computational Biology / standards*
  • Datasets as Topic
  • High-Throughput Nucleotide Sequencing / standards*
  • Humans
  • Metagenomics / methods
  • Metagenomics / standards*
  • Plants / genetics
  • Quality Control
  • Sensitivity and Specificity
  • Sequence Analysis, DNA
  • Software*