The objective of most comparative trials is to show a "positive" result whereby one treatment is significantly better than another. However, the motivation behind some trials is to demonstrate a "negative" result, namely that two treatments are equally effective. Such "equivalence" trials usually arise when comparing a new conservative treatment with an effective but more intensive standard therapy that has potential adverse side effects. Retrospective sample-size tables were provided to determine whether a completed study showing no significant difference between treatment effects is large enough to justify a true-negative conclusion. In this article, the sample sizes given in those decision-making tables are compared with sample sizes derived using a confidence-interval approach, the method we recommend for interpreting completed trials in order to judge the range of true treatment differences that is reasonably consistent with the observed data. Some implications of this comparison for the interpretation of negative studies are discussed. Selected biostatistical principles governing the proper use of the tables are also presented. Finally, we distinguish between a completed negative study and an equivalence study, which is designed from the outset to demonstrate the comparability of different treatments. Important design considerations and sample-size tables are given for planning equivalence trials. We show that very large numbers of patients are usually needed to establish with a high degree of confidence that two treatments have comparable efficacy.
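To illustrate why equivalence trials require large samples, the following sketch uses a standard normal-approximation formula (Blackwelder-style) for the number of patients per group needed to conclude that two treatments with a common true success rate `p` differ by no more than a margin `delta`. The numbers below are illustrative assumptions, not values from the article's tables, and the formula is one common choice rather than the article's own method.

```python
# Sketch: normal-approximation sample size per group for an
# equivalence trial on two proportions (assumed common formula;
# the article's tables may use a different derivation).
from math import ceil
from statistics import NormalDist

def equivalence_n_per_group(p, delta, alpha=0.05, beta=0.10):
    """Patients per group to conclude equivalence within margin `delta`,
    assuming both treatments share true success rate `p`, a one-sided
    significance level `alpha`, and power 1 - beta."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha), z(1 - beta)
    # n = (z_a + z_b)^2 * 2 p (1 - p) / delta^2, rounded up
    return ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / delta ** 2)

# Illustrative example: 90% success rate, 10-percentage-point margin.
print(equivalence_n_per_group(0.90, 0.10))  # 155 per group
print(equivalence_n_per_group(0.90, 0.05))  # 617 per group
```

Note how halving the equivalence margin roughly quadruples the required sample size, which is the quantitative sense in which "very large numbers of patients" are needed to demonstrate comparable efficacy with high confidence.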