J Proteome Res. 2009 Jul;8(7):3737-45. doi: 10.1021/pr801109k.

Improvements to the Percolator Algorithm for Peptide Identification From Shotgun Proteomics Data Sets


Marina Spivak et al. J Proteome Res. 2009.

Abstract

Shotgun proteomics coupled with database search software allows the identification of a large number of peptides in a single experiment. However, some existing search algorithms, such as SEQUEST, use score functions that are designed primarily to identify the best peptide for a given spectrum. Consequently, when comparing identifications across spectra, the SEQUEST score function Xcorr fails to discriminate accurately between correct and incorrect peptide identifications. Several machine learning methods have been proposed to address the resulting classification task of distinguishing between correct and incorrect peptide-spectrum matches (PSMs). A recent example is Percolator, which uses semisupervised learning and a decoy database search strategy to learn to distinguish between correct and incorrect PSMs identified by a database search algorithm. The current work describes three improvements to Percolator. (1) Percolator's heuristic optimization is replaced with a clear objective function, with intuitive reasons behind its choice. (2) Tractable nonlinear models are used instead of linear models, leading to improved accuracy over the original Percolator. (3) A method, Q-ranker, for directly optimizing the number of identified spectra at a specified q value is proposed, which achieves further gains.
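As a concrete illustration of the q value criterion referenced above and throughout the figures: under the target-decoy strategy, the FDR at a score threshold can be estimated as the number of decoy PSMs above the threshold divided by the number of target PSMs above it, and each PSM's q value is the minimum FDR at which it is still accepted. The sketch below is a hypothetical helper in that spirit, not Percolator's or Q-ranker's actual code; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def qvalues_from_scores(target_scores, decoy_scores):
    """Hypothetical helper: estimate q values for target PSMs from a
    target-decoy search, using FDR ~= (#decoys >= t) / (#targets >= t)."""
    t = np.sort(np.asarray(target_scores, dtype=float))[::-1]  # best score first
    d = np.sort(np.asarray(decoy_scores, dtype=float))[::-1]
    # Decoys scoring at least as high as each target score.
    decoys_above = np.searchsorted(-d, -t, side="right")
    targets_above = np.arange(1, len(t) + 1)
    fdr = decoys_above / targets_above
    # q value = minimum FDR over all thresholds that still accept the PSM.
    q = np.minimum.accumulate(fdr[::-1])[::-1]
    return t, q

# Counting accepted PSMs at a threshold, the quantity plotted on the figures'
# y axes:
# scores, q = qvalues_from_scores(target_scores, decoy_scores)
# print((q <= 0.01).sum())
```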

Figures

Figure 1. Three types of loss function
Each panel plots the loss as a function of the difference between the true and predicted labels. The squared loss L(f(x), y) = (f(x) − y)^2 is often used in regression problems, but also in classification [22]. The hinge loss L(f(x), y) = max(0, 1 − yf(x)) is used as a convex approximation to the zero-one loss in support vector machines [8]. The sigmoid loss L(f(x), y) = 1/(1 + exp(yf(x))) is perhaps less commonly used, but is discussed in, e.g., [23, 27].
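For concreteness, the three losses can be written out directly. A minimal sketch assuming labels y in {−1, +1} (target vs. decoy PSMs) and a raw score f(x); the function names are illustrative:

```python
import numpy as np

def squared_loss(fx, y):
    # Penalizes any deviation from the label, even confidently correct scores.
    return (fx - y) ** 2

def hinge_loss(fx, y):
    # Convex surrogate for the zero-one loss; zero once the margin y*f(x) >= 1.
    return np.maximum(0.0, 1.0 - y * fx)

def sigmoid_loss(fx, y):
    # Bounded and saturating for large |f(x)|, limiting the pull of outliers.
    return 1.0 / (1.0 + np.exp(y * fx))
```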
Figure 2. Comparison of loss functions
Each panel plots the number of accepted PSMs for the yeast (A) training set and (B) test set as a function of the q value threshold. Each series corresponds to one of the three loss functions shown in Figure 1, with series for Percolator and SEQUEST included for comparison.
Figure 3. “Cutting” the hinge loss makes a sigmoid-like loss called the ramp loss
Making the hinge loss have zero gradient when z = y_i f(x) < s, for some chosen value s, effectively yields a piecewise-linear version of a sigmoid function.
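A sketch of that construction, with the cut point s left as a free parameter (the default value here is an illustrative assumption, not the paper's choice):

```python
import numpy as np

def ramp_loss(fx, y, s=-1.0):
    # Hinge loss capped ("cut") at z = y*f(x) = s: below s the loss is flat
    # at 1 - s, so hopelessly misclassified PSMs stop driving the fit.
    z = y * fx
    return np.minimum(np.maximum(0.0, 1.0 - z), 1.0 - s)
```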
Figure 4. Comparison of Percolator, direct classification and Q-ranker
The figure plots the number of accepted PSMs as a function of q value threshold for the yeast data set. Each series corresponds to a different ranking algorithm, including Percolator as well as linear and nonlinear versions of the direct classification algorithm and Q-ranker. The nonlinear methods use 5 hidden units.
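A minimal sketch of such a nonlinear scorer, a two-layer network with five hidden units; the class name, tanh nonlinearity, and initialization scheme are assumptions rather than the paper's specification:

```python
import numpy as np

class TwoLayerScorer:
    """Hypothetical two-layer PSM scorer: f(x) = v . tanh(W x + b)."""

    def __init__(self, n_features, n_hidden=5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_features))
        self.b = np.zeros(n_hidden)
        self.v = rng.normal(scale=0.1, size=n_hidden)

    def score(self, x):
        # Hidden tanh layer followed by a linear output layer.
        return float(self.v @ np.tanh(self.W @ np.asarray(x, dtype=float) + self.b))
```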
Figure 5. Comparison of training optimization methods (iteration vs. error rate)
The Q-ranker optimization starts from the best result of direct optimization achieved during the course of training and continues for a further 300 iterations. These results are on the training set. Note that for each q value choice, Q-ranker improves the training error over the best result from the classification algorithm.
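The schedule described here can be summarized as a warm-started two-stage loop. The sketch below is only a schematic of that schedule; every argument (the update callables, the counting function) is a hypothetical stand-in for the paper's actual update rules:

```python
def train_two_stage(model, step_classifier, step_qranker, count_at_q,
                    n_stage1=300, n_stage2=300):
    """Stage 1: direct classification training, remembering the best model
    by accepted-PSM count at the chosen q value. Stage 2: warm-start the
    Q-ranker updates from that snapshot for a further 300 iterations."""
    best_model, best_count = model, count_at_q(model)
    for _ in range(n_stage1):
        model = step_classifier(model)
        count = count_at_q(model)
        if count > best_count:
            best_model, best_count = model, count
    model = best_model  # warm start from the best stage-1 snapshot
    for _ in range(n_stage2):
        model = step_qranker(model)
    return model
```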
Figure 6. Comparison of PeptideProphet, Percolator and Q-ranker on four data sets
Each panel plots the number of accepted target PSMs as a function of q value. The series correspond to the three algorithms, with two variants of Q-ranker that use 17 and 37 features, respectively.
