Background: In the hierarchy of research designs, the results of randomized, controlled trials are considered to be evidence of the highest grade, whereas observational studies are viewed as having less validity because they reportedly overestimate treatment effects. We used published meta-analyses to identify randomized clinical trials and observational studies that examined the same clinical topics. We then compared the results of the original reports according to the type of research design.
Methods: A search of the Medline database for articles published in five major medical journals from 1991 to 1995 identified meta-analyses of randomized, controlled trials and meta-analyses of either cohort or case-control studies that assessed the same intervention. For each of five topics, summary estimates and 95 percent confidence intervals were calculated on the basis of data from the individual randomized, controlled trials and the individual observational studies.
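The abstract does not specify how the summary estimates were pooled; a common approach for combining relative risks across studies is inverse-variance fixed-effect meta-analysis on the log scale. The sketch below is a minimal illustration of that technique, assuming each study reports a relative risk with a 95 percent confidence interval; the function name and input format are illustrative, not taken from the paper.

```python
import math

def pool_log_rr(estimates):
    """Inverse-variance fixed-effect pooling of relative risks.

    `estimates` is a list of (rr, ci_low, ci_high) tuples. The standard
    error of each log relative risk is recovered from the width of the
    95 percent confidence interval.
    """
    z = 1.96  # normal quantile for a 95 percent confidence interval
    total_weight = 0.0
    weighted_sum = 0.0
    for rr, lo, hi in estimates:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1.0 / se ** 2          # inverse-variance weight
        total_weight += w
        weighted_sum += w * log_rr
    pooled = weighted_sum / total_weight
    se_pooled = math.sqrt(1.0 / total_weight)
    # Exponentiate back to the relative-risk scale.
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))
```

With a single study as input, the pooled estimate simply reproduces that study's relative risk and confidence interval; with several studies, narrower confidence intervals (smaller standard errors) receive proportionally more weight.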
Results: For the five clinical topics and 99 reports evaluated, the average results of the observational studies were remarkably similar to those of the randomized, controlled trials. For example, analysis of 13 randomized, controlled trials of the effectiveness of bacille Calmette-Guérin vaccine in preventing active tuberculosis yielded a relative risk of 0.49 (95 percent confidence interval, 0.34 to 0.70) among vaccinated patients, as compared with an odds ratio of 0.50 (95 percent confidence interval, 0.39 to 0.65) from 10 case-control studies. In addition, the range of the point estimates for the effect of vaccination was wider for the randomized, controlled trials (0.20 to 1.56) than for the observational studies (0.17 to 0.84).
Conclusions: The results of well-designed observational studies (with either a cohort or a case-control design) do not systematically overestimate the magnitude of the effects of treatment as compared with those in randomized, controlled trials on the same topic.