There is a general move towards greater emphasis on point and interval estimates of treatment effect in the reporting of clinical trials, so that significance testing plays a lesser role. In this article we examine several issues that affect the use and interpretation of conventional estimation methods. Should we accept or avoid the stereotyped use of 95 per cent confidence intervals? Should the abstract of a trial report include confidence intervals for major endpoints? Are frequentist confidence intervals being interpreted correctly, and should Bayesian probability intervals be more widely used in trial reports? Does the timing of publication, such as early stopping because of a large observed treatment difference, lead to exaggerated point and interval estimates? How can we produce realistic estimates from subgroup analyses? Is publication bias seriously affecting our ability to obtain unbiased estimates? Is the emphasis on estimation methods a powerful tool for encouraging larger sample sizes? Can we resolve the controversy concerning fixed or random effects models for estimation in overviews of related trials? Our arguments are illustrated by results from recent trials in cardiovascular disease.
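The contrast between frequentist confidence intervals and Bayesian probability intervals raised above can be sketched numerically. The following is a minimal illustration, assuming an approximately normal treatment-effect estimate and a normal prior; all numbers are hypothetical, not taken from any trial discussed in the article.

```python
# Hypothetical sketch: a frequentist 95% confidence interval versus a
# Bayesian 95% probability (credible) interval for a treatment difference,
# assuming an approximately normal estimate. Illustrative numbers only.
from math import sqrt

def frequentist_ci(estimate, se, z=1.96):
    """Conventional 95% confidence interval: estimate +/- 1.96 * SE."""
    return (estimate - z * se, estimate + z * se)

def bayesian_interval(estimate, se, prior_mean=0.0, prior_sd=1.0, z=1.96):
    """95% posterior interval under a normal prior and normal likelihood.

    Posterior precision is the sum of the prior and data precisions, and
    the posterior mean shrinks the observed estimate toward the prior mean,
    which tempers exaggerated estimates from, e.g., early stopping.
    """
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = 1.0 / se ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)
    post_sd = sqrt(post_var)
    return (post_mean - z * post_sd, post_mean + z * post_sd)

# Illustrative treatment difference of 0.5 with standard error 0.2:
print(frequentist_ci(0.5, 0.2))     # (0.108, 0.892)
print(bayesian_interval(0.5, 0.2))  # narrower, shrunk toward the prior mean 0
```

With a sceptical prior centred at zero, the posterior interval is slightly narrower than the confidence interval and pulled toward no effect; how much depends on the prior standard deviation chosen.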