The validity of meta-analyses has recently been examined by comparing their results with those of megatrials on the same topic. We investigated the reliability of this gold standard by identifying megatrials, defined as trials involving more than 1000 subjects, in a recent issue of the Cochrane Library and in the article by LeLorier et al. (N Engl J Med 1997;337:536-42). In the former set, we identified 289 pairs of megatrials that studied the same patient-intervention-outcome combination. Of these, 210 (73%, 95% CI: 67-77%) reported odds ratios or weighted mean differences that did not differ significantly from each other. Agreement between the trials' statistical conclusions regarding outcomes, measured by quadratic weighted kappa, was 0.40 (95% CI: 0.29-0.51). The article by LeLorier et al. yielded 133 comparisons, of which 97 (73%, 95% CI: 64-79%) reported mutually compatible odds ratios; agreement between statistical conclusions was a kappa of 0.33 (95% CI: 0.18-0.47). Agreement among megatrials was thus approximately as high as that previously reported between meta-analyses and megatrials. These findings suggest that treating megatrials as the gold standard is problematic and that there is no substitute for clear, hard thinking about any study, be it a meta-analysis or a megatrial.
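The two statistics reported above can be reproduced with short routines. The abstract does not state which interval method was used; the sketch below assumes the Wilson score interval for the binomial proportions (which, for 210 of 289, yields the reported 67-77% once rounded) and the standard quadratic disagreement weights for the weighted kappa. Both choices are assumptions for illustration, not a description of the authors' actual computation.

```python
import math


def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n.

    Assumption: the abstract's intervals may have been computed differently;
    this is one common choice. z=1.96 gives an approximate 95% interval.
    """
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half


def quadratic_weighted_kappa(confusion):
    """Quadratic weighted kappa for a k x k agreement (confusion) matrix.

    Disagreement weight for cells (i, j) is (i-j)^2 / (k-1)^2; kappa is
    1 minus the ratio of observed to chance-expected weighted disagreement.
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    observed = expected = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
            observed += w * confusion[i][j]
            expected += w * row_tot[i] * col_tot[j] / n
    return 1 - observed / expected
```

For example, `wilson_ci(210, 289)` returns an interval that rounds to 67-77%, matching the first result quoted above, and `quadratic_weighted_kappa` returns 1.0 for a matrix with all counts on the diagonal (perfect agreement).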