Fulcrum: condensing redundant reads from high-throughput sequencing studies

Bioinformatics. 2012 May 15;28(10):1324-7. doi: 10.1093/bioinformatics/bts123. Epub 2012 Mar 13.

Abstract

Motivation: Ultra-high-throughput sequencing produces duplicate and near-duplicate reads, which can consume substantial computational resources in downstream applications. A tool that collapses such reads should reduce storage costs and simplify downstream assembly.

Results: We developed Fulcrum to collapse identical and near-identical Illumina and 454 reads (such as those from PCR clones) into single error-corrected sequences; it can process paired-end as well as single-end reads. Fulcrum is customizable and can be deployed on a single machine, a local network or a commercially available MapReduce cluster, and it has been optimized to maximize ease-of-use, cross-platform compatibility and future scalability. Sequence datasets have been collapsed by up to 71%, and the reduced number and improved quality of the resulting sequences allow assemblers to produce longer contigs while using less memory.
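The core collapsing idea described above (grouping identical and near-identical reads and emitting one error-corrected consensus per group) can be sketched as follows. This is an illustrative assumption, not Fulcrum's actual implementation: the prefix-key grouping (`key_len`) and per-position majority vote stand in for whatever keying and error-correction scheme the tool really uses.

```python
from collections import Counter, defaultdict

def collapse_reads(reads, key_len=8):
    """Collapse near-duplicate reads into consensus sequences.

    Reads sharing the same leading `key_len` bases are grouped together
    (a hypothetical stand-in for Fulcrum's keying step); each group is
    reduced to a single consensus by per-position majority vote, which
    corrects isolated sequencing errors within a group.
    """
    groups = defaultdict(list)
    for read in reads:
        groups[read[:key_len]].append(read)

    consensi = []
    for members in groups.values():
        # Use the shortest read length so every position has full coverage.
        length = min(len(r) for r in members)
        consensus = "".join(
            Counter(r[i] for r in members).most_common(1)[0][0]
            for i in range(length)
        )
        consensi.append(consensus)
    return consensi
```

For example, three reads differing only by a single-base error would collapse into one majority-vote consensus, while an unrelated read survives unchanged, illustrating how the dataset shrinks and per-read quality improves before assembly.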

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Algorithms
  • Gene Expression Profiling
  • High-Throughput Nucleotide Sequencing / methods*
  • Humans
  • Pseudomonas / genetics
  • Sequence Analysis, DNA / methods
  • Software*