Over the years, many writers have implied that statistics can provide almost any result that is convenient at the time. Of course, honest practitioners use statistics in an attempt to quantify the probability that a certain hypothesis is true or false or to better understand what the data actually means.

The field of statistics has been developed over more than 200 years by famous mathematicians such as Laplace, Gauss, and Pascal, and more recently by Markov, Fisher, and Wiener. The Reverend Thomas Bayes (1702-1761) appears to have had little influence on mathematics outside of statistics, where Bayes' Theorem has found wide application.

As described in the FDA's 2010 Guidance ... for the Use of Bayesian Statistics in Medical Device Clinical Trials, "Bayesian statistics is an approach for learning from evidence as it accumulates. In clinical trials, traditional (frequentist) statistical methods may use information from previous studies only at the design stage. Then, at the data analysis stage, the information from these studies is considered as a complement to, but not part of, the formal analysis. In contrast, the Bayesian approach uses Bayes' Theorem to formally combine prior information with current information on a quantity of interest. The Bayesian idea is to consider the prior information and the trial results as part of a continual data stream, in which inferences are being updated each time new data becomes available." (1)

Bayes' Theorem

As explained in the FDA's Guidance document, prior information about a topic that you wish to investigate in more detail can be combined with new data using Bayes' Theorem. Symbolically:

p(A|B) = p(B|A) x p(A)/p(B)

where: p(A|B) = the posterior probability of A occurring given condition B

p(B|A) = the likelihood probability of condition B being true when A occurs

p(A)=the prior probability of outcome A occurring regardless of condition B

p(B) = the evidence probability of condition B being true regardless of outcome A
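The update rule above is a single multiplication and division, so it can be sketched in a few lines of Python. The function name `bayes_posterior` and its argument names are my own, chosen to mirror the four terms defined above:

```python
def bayes_posterior(p_b_given_a, p_a, p_b):
    """Bayes' Theorem: p(A|B) = p(B|A) * p(A) / p(B).

    p_b_given_a -- likelihood: probability of condition B when A occurs
    p_a         -- prior: probability of outcome A regardless of B
    p_b         -- evidence: probability of condition B regardless of A
    Returns the posterior probability of A given B.
    """
    return p_b_given_a * p_a / p_b
```

For example, `bayes_posterior(0.600, 0.417, 0.333)` returns the posterior probability of A given that condition B has been observed.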

Reference 2 discusses the application of Bayes' Theorem to a horse-racing example. In the past, a horse won five out of 12 races, but it had rained heavily before three of the five wins. One race was lost when it had rained. What is the probability that the horse will win the next race if it rains?

We want to know p(winning | it has rained). We know the following:

p(it has rained | winning) = 3/5 = 0.600

p(winning) = 5/12 = 0.417

p(raining before a race) = 4/12 = 0.333


From Bayes' Theorem, p(winning | it has rained) = 0.600 x 0.417/0.333 = 0.751, or about a 75% chance that the horse will win the next race if it rains.
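The horse-racing calculation can be checked with exact fractions rather than the rounded decimals, which shows that the posterior is exactly 3/4. This is a small illustrative script, not part of the original reference:

```python
from fractions import Fraction

# Observed record: 5 wins in 12 races; rain before 3 of the 5 wins;
# rain before 4 of the 12 races overall (3 wins + 1 loss in the rain).
p_rain_given_win = Fraction(3, 5)   # p(it has rained | winning)
p_win = Fraction(5, 12)             # p(winning)
p_rain = Fraction(4, 12)            # p(raining before a race)

# Bayes' Theorem: p(winning | rain) = p(rain | winning) * p(winning) / p(rain)
p_win_given_rain = p_rain_given_win * p_win / p_rain
print(p_win_given_rain)         # exact fraction
print(float(p_win_given_rain))  # decimal value
```

Working in fractions avoids the small rounding error in 0.600 x 0.417/0.333 = 0.751; the exact answer is 0.75.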