We consider how to combine several independent studies of the same diagnostic test, where each study reports an estimated false positive rate (FPR) and an estimated true positive rate (TPR). We propose constructing a summary receiver operating characteristic (ROC) curve by the following steps. (i) Convert each FPR to its logistic transform U and each TPR to its logistic transform V, after adding 1/2 to each observed frequency. (ii) For each study calculate D = V - U, the log of the odds ratio of TPR against FPR, and S = V + U, a quantity related to the test threshold; then plot each study's point (S_i, D_i). (iii) Fit a robust-resistant regression line (or an equally weighted least-squares regression line) to these points, with D = V - U as the dependent variable. (iv) Back-transform the fitted line to ROC space. To avoid model-dependent extrapolation from irrelevant regions of ROC space, we propose defining a priori a value of FPR so large that the test simply would not be used at that FPR, and a value of TPR so low that the test would not be used at that TPR. Then (a) only data points lying in the north-west rectangle of the unit square thus defined are used in the analysis, and (b) the estimated summary ROC curve is depicted only within that subregion. We illustrate the methods using simulated and real data sets, and we point to ways of comparing different tests and of taking the effects of covariates into account.
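Steps (i)–(iv) can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes each study supplies a 2x2 table of counts (tp, fp, fn, tn), uses an ordinary equally weighted least-squares fit in place of the robust-resistant line, and takes `fpr_max` and `tpr_min` as the a priori bounds defining the north-west rectangle; all function and variable names are hypothetical.

```python
import numpy as np

def sroc_curve(tp, fp, fn, tn, fpr_max=0.5, tpr_min=0.5):
    """Sketch of a summary ROC fit from per-study 2x2 counts."""
    tp, fp, fn, tn = (np.asarray(x, dtype=float) for x in (tp, fp, fn, tn))

    # (i) add 1/2 to each observed frequency, then take logistic transforms
    tpr = (tp + 0.5) / (tp + fn + 1.0)
    fpr = (fp + 0.5) / (fp + tn + 1.0)
    V = np.log(tpr / (1.0 - tpr))   # logit(TPR)
    U = np.log(fpr / (1.0 - fpr))   # logit(FPR)

    # (a) keep only points in the a-priori north-west rectangle
    keep = (fpr <= fpr_max) & (tpr >= tpr_min)

    # (ii) D = log odds ratio, S = quantity related to the test threshold
    D = V[keep] - U[keep]
    S = V[keep] + U[keep]

    # (iii) equally weighted least-squares line D = a + b*S
    # (the robust-resistant alternative is not shown here)
    b, a = np.polyfit(S, D, 1)

    # (iv) back-transform to ROC space, restricted to the subregion (b).
    # From D = a + b*S with D = V - U and S = V + U:
    #   V = (a + (1 + b) * U) / (1 - b)
    grid_fpr = np.linspace(1e-3, fpr_max, 200)
    u = np.log(grid_fpr / (1.0 - grid_fpr))
    v = (a + (1.0 + b) * u) / (1.0 - b)
    grid_tpr = 1.0 / (1.0 + np.exp(-v))
    return grid_fpr, grid_tpr, a, b
```

The back-transformation step solves the fitted linear relation for V in terms of U, so the summary curve gives TPR as an explicit function of FPR within the retained subregion.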