Bayesian inference is a formal method for combining evidence external to a study, represented by a prior probability curve, with the evidence generated by the study, represented by a likelihood function. Because Bayes' theorem provides a proper way to measure and to combine study evidence, Bayesian methods can be viewed as a calculus of evidence, not just of belief. In this introduction, we explore the properties and consequences of using the Bayesian measure of evidence, the Bayes factor (in its simplest form, the likelihood ratio). The Bayes factor compares the relative support given to two hypotheses by the data, in contrast to the P-value, which is calculated with reference only to the null hypothesis. This comparative property of the Bayes factor, combined with the need to predefine the alternative hypothesis explicitly, produces a different assessment of the strength of evidence against the null hypothesis than does the P-value, and it gives Bayesian procedures attractive frequency properties. However, the most important contribution of Bayesian methods is the way in which they affect both who participates in a scientific dialogue and what is discussed. With the emphasis moved from "error rates" to evidence, content experts have an opportunity for their input to be meaningfully incorporated, making it easier for regulatory decisions to be made correctly.
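To make the simplest form of the Bayes factor concrete, the following minimal sketch computes the likelihood ratio for binomial data under a null and a prespecified alternative hypothesis. The trial numbers (14 responses in 20 patients) and the hypothesized response rates are illustrative assumptions, not data from the source:

```python
from math import comb

def likelihood(k, n, p):
    # Binomial likelihood of observing k successes in n trials
    # when the true success probability is p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative data: 14 responses out of 20 patients (assumed numbers)
k, n = 14, 20
p_null, p_alt = 0.5, 0.7  # null and prespecified alternative response rates

# Bayes factor in its simplest form: the likelihood ratio
# of the alternative hypothesis to the null hypothesis.
bayes_factor = likelihood(k, n, p_alt) / likelihood(k, n, p_null)
print(f"Bayes factor (H1 vs H0): {bayes_factor:.2f}")
```

Note that, unlike a P-value, this quantity depends on both hypotheses: changing the prespecified alternative rate changes the measured strength of evidence, even though the data and the null hypothesis stay the same.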