Publication and selection biases in meta-analysis are more likely to affect small studies, which also tend to be of lower methodological quality. This may lead to "small-study effects," where the smaller studies in a meta-analysis show larger treatment effects. Small-study effects may also arise from between-trial heterogeneity. Statistical tests for small-study effects have been proposed, but their validity has been questioned.

A set of typical meta-analyses containing 5, 10, 20, and 30 trials was defined based on the characteristics of 78 published meta-analyses identified in a hand search of eight journals from 1993 to 1997. Simulations were performed to assess the power of a weighted regression method and a rank correlation test in the presence of no bias, moderate bias, or severe bias, with evidence of small-study effects defined as P < 0.1.

The power to detect bias increased with the number of trials, and the rank correlation test was consistently less powerful than the regression method. For example, assuming a control group event rate of 20% and no treatment effect, moderate bias was detected with the regression test in 13.7%, 23.5%, 40.1%, and 51.6% of meta-analyses with 5, 10, 20, and 30 trials; the corresponding figures for the correlation test were 8.5%, 14.7%, 20.4%, and 26.0%. Severe bias was detected with the regression method in 23.5%, 56.1%, 88.3%, and 95.9% of meta-analyses with 5, 10, 20, and 30 trials, compared with 11.9%, 31.1%, 45.3%, and 65.4% with the correlation test. Similar results were obtained in simulations incorporating moderate treatment effects. However, the regression method gave inflated false-positive rates in some situations: large treatment effects, few events per trial, or all trials of similar size. Using the regression method, evidence of small-study effects was present in 21 (26.9%) of the 78 published meta-analyses.

Tests for small-study effects should routinely be performed in meta-analysis.
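The two tests compared above can be sketched as follows. This is a minimal illustration, not the implementation used in the study: the weighted regression method regresses the standardized effect (effect divided by its standard error) on precision and tests whether the intercept differs from zero (the construction commonly attributed to Egger et al.), while the rank correlation test computes Kendall's tau between variance-stabilized effects and trial variances (the Begg–Mazumdar construction). The function names and the use of SciPy are assumptions.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Weighted regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    an intercept that differs from zero suggests small-study effects.
    Returns (intercept, two-sided P-value).
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    z, precision = effects / ses, 1.0 / ses
    res = stats.linregress(precision, z)
    t = res.intercept / res.intercept_stderr
    df = len(effects) - 2
    return res.intercept, 2 * stats.t.sf(abs(t), df)

def begg_test(effects, ses):
    """Rank correlation test for funnel-plot asymmetry.

    Kendall's tau between variance-stabilized deviations from the
    fixed-effect pooled estimate and the trial variances.
    Returns (tau, two-sided P-value).
    """
    effects = np.asarray(effects, float)
    v = np.asarray(ses, float) ** 2
    w = 1.0 / v
    pooled = np.sum(w * effects) / np.sum(w)
    # Variance of (effect_i - pooled); always positive for k > 1 trials
    t = (effects - pooled) / np.sqrt(v - 1.0 / np.sum(w))
    return stats.kendalltau(t, v)
```

As in the simulations described above, evidence of small-study effects would be declared when the returned P-value is below 0.1.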
Their power is, however, limited, particularly for moderate amounts of bias or for meta-analyses based on a small number of small studies. When evidence of small-study effects is found, possible explanations should be carefully considered and discussed in the report of the meta-analysis.
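The kind of simulation described above can be sketched as follows. This is a simplified version, not the study's design: per-arm trial sizes are drawn from an assumed range rather than from the characteristics of the 78 published meta-analyses, and a 0.5 continuity correction is applied to every cell (usually it is added only when a cell is zero). The sketch estimates the false-positive rate of the regression test at P < 0.1 for unbiased 10-trial meta-analyses with a 20% control group event rate and no treatment effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2000)

def simulate_trials(k, p_control=0.2, odds_ratio=1.0):
    """Simulate k two-arm trials with binary outcomes and no bias.

    Returns per-trial log odds ratios and their standard errors.
    Trial sizes are drawn uniformly from 20-500 per arm (an assumption).
    """
    n = rng.integers(20, 500, size=k)
    p_treat = odds_ratio * p_control / (1 - p_control + odds_ratio * p_control)
    a = rng.binomial(n, p_treat) + 0.5    # treatment events (+0.5 correction)
    c = rng.binomial(n, p_control) + 0.5  # control events (+0.5 correction)
    b, d = n + 1.0 - a, n + 1.0 - c       # non-events in each arm
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

def regression_p(log_or, se):
    """P-value for the intercept in the weighted regression test
    (repeated here so the sketch is self-contained)."""
    res = stats.linregress(1.0 / se, log_or / se)
    t = res.intercept / res.intercept_stderr
    return 2 * stats.t.sf(abs(t), len(log_or) - 2)

# With no bias and no treatment effect, the proportion of meta-analyses
# flagged at P < 0.1 estimates the test's false-positive rate.
reps = 500
rate = np.mean([regression_p(*simulate_trials(10)) < 0.1 for _ in range(reps)])
```

Replacing the unbiased simulation with one that suppresses small nonsignificant trials would instead estimate power, which is how the detection rates quoted above for moderate and severe bias would be obtained.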