
An Essay Towards Solving a Problem in the Doctrine of Chances

1763 mathematics essay by Thomas Bayes

"An Essay Towards Solving a Problem in the Doctrine of Chances" is a work on the mathematical theory of probability by Thomas Bayes, published in 1763, two years after its author's death, and containing multiple amendments and additions due to his friend Richard Price. The title comes from the contemporary use of the phrase "doctrine of chances" to mean the theory of probability, which had been introduced via the title of a book by Abraham de Moivre. Contemporary reprints of the essay carry a more specific and significant title: A Method of Calculating the Exact Probability of All Conclusions Founded on Induction.

The essay includes theorems of conditional probability which form the basis of what is now called Bayes's Theorem, together with a detailed treatment of the problem of setting a prior probability.

Bayes supposed a sequence of independent experiments, each having as its outcome either success or failure, the probability of success being some number p between 0 and 1. But then he supposed p to be an uncertain quantity, whose probability of being in any interval between 0 and 1 is the length of the interval. In modern terms, p would be considered a random variable uniformly distributed between 0 and 1. Conditionally on the value of p, the trials resulting in success or failure are independent, but unconditionally (or "marginally") they are not. That is because if a large number of successes are observed, then p is more likely to be large, so that success on the next trial is more probable. The question Bayes addressed was: what is the conditional probability distribution of p, given the numbers of successes and failures so far observed? The answer is that its probability density function is

f(p) = \frac{(n+1)!}{k!\,(n-k)!}\, p^{k}(1-p)^{n-k} \quad \text{for } 0 \le p \le 1

(and ƒ(p) = 0 for p < 0 or p > 1) where k is the number of successes so far observed, and n is the number of trials so far observed. This is what today is called the Beta distribution with parameters k + 1 and n − k + 1.
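The normalizing constant follows from a standard Beta-function identity (a step not spelled out above):

\int_0^1 p^{k}(1-p)^{n-k}\,dp = \frac{k!\,(n-k)!}{(n+1)!},

so multiplying the kernel p^k(1 − p)^(n − k) by (n + 1)!/(k!(n − k)!) gives a function that integrates to 1 over [0, 1], namely the Beta(k + 1, n − k + 1) density stated above.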

Outline

Bayes's preliminary results in conditional probability (especially Propositions 3, 4 and 5) imply the truth of the theorem that is named for him. He states: "If there be two subsequent events, the probability of the second b/N and the probability of both together P/N, and it being first discovered that the second event has also happened, from hence I guess that the first event has also happened, the probability I am right is P/b." Symbolically, this implies (see Stigler 1982):

P(B \mid A) = \frac{P(A \cap B)}{P(A)}, \quad \text{if } P(A) \neq 0,

which leads to Bayes's Theorem for conditional probabilities:

P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}, \quad \text{if } P(B) \neq 0.
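A quick numerical illustration of this identity (a minimal sketch in Python; the probabilities below are invented for illustration and are not from the essay):

    # Toy check of Bayes's theorem with made-up probabilities.
    p_a = 0.3              # P(A)
    p_b_given_a = 0.5      # P(B | A)
    p_b_given_not_a = 0.2  # P(B | not A)

    # Law of total probability gives P(B).
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # 0.29

    # Bayes's theorem: P(A | B) = P(B | A) P(A) / P(B).
    p_a_given_b = p_b_given_a * p_a / p_b
    print(p_a_given_b)     # 0.15 / 0.29 ≈ 0.517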

However, it does not appear that Bayes emphasized or focused on this finding. Rather, he focused on finding the solution to a much broader inferential problem:

"Given the number of times in which an unknown event has happened and failed the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named."

The essay includes an example of a man trying to guess the ratio of "blanks" and "prizes" at a lottery. So far the man has watched the lottery draw ten blanks and one prize. Given these data, Bayes showed in detail how to compute the probability that the ratio of blanks to prizes is between 9:1 and 11:1 (the probability is low, about 7.7%). He went on to describe that computation after the man has watched the lottery draw twenty blanks and two prizes, forty blanks and four prizes, and so on. Finally, having drawn 10,000 blanks and 1,000 prizes, the probability reaches about 97%.
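The first of these figures can be reproduced from the Beta posterior described above (a minimal sketch assuming SciPy is available; a blanks-to-prizes ratio between 9:1 and 11:1 corresponds to the probability p of drawing a blank lying between 9/10 and 11/12):

    # Posterior for p (probability of a blank) after 10 blanks and 1 prize,
    # starting from Bayes's uniform prior: Beta(k + 1, n - k + 1) = Beta(11, 2).
    from scipy.stats import beta

    k, n = 10, 11  # blanks observed, total draws
    posterior = beta(k + 1, n - k + 1)

    # Ratio of blanks to prizes between 9:1 and 11:1  <=>  9/10 < p < 11/12.
    prob = posterior.cdf(11 / 12) - posterior.cdf(9 / 10)
    print(round(prob, 3))  # ≈ 0.077, i.e. about 7.7%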

Bayes's main result (Proposition 9) is the following in modern terms:

Assume a uniform prior distribution of the binomial parameter p. After observing m successes and n failures,

P(a < p < b \mid m; n) = \frac{\int_a^b \binom{n+m}{m} p^{m}(1-p)^{n}\,dp}{\int_0^1 \binom{n+m}{m} p^{m}(1-p)^{n}\,dp}.
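Proposition 9 can also be evaluated exactly as written, as a ratio of two integrals (a minimal sketch assuming SciPy; the binomial coefficient cancels between numerator and denominator, so it is omitted):

    # Proposition 9 as a ratio of integrals, evaluated by numerical quadrature.
    from scipy.integrate import quad

    def prob_in_interval(m, n, a, b):
        """P(a < p < b | m successes, n failures) under a uniform prior on p."""
        kernel = lambda p: p**m * (1 - p)**n  # binomial coefficient cancels
        numerator, _ = quad(kernel, a, b)
        denominator, _ = quad(kernel, 0, 1)
        return numerator / denominator

    # Reproduces the lottery figure above: 10 blanks (successes), 1 prize (failure).
    print(round(prob_in_interval(10, 1, 9 / 10, 11 / 12), 3))  # ≈ 0.077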

It is unclear whether Bayes was a "Bayesian" in the modern sense, that is, whether he was interested in Bayesian inference or merely in probability. Proposition 9 seems "Bayesian" in its presentation as a probability statement about the parameter p. However, Bayes stated his question in a manner that suggests a frequentist viewpoint: he supposed that a ball is thrown at random onto a square table (this table is often misrepresented as a billiard table, and the ball as a billiard ball, but Bayes never describes them as such), and considered further balls that fall to the left or right of the first ball with probabilities p and 1 − p. The algebra is of course identical no matter which view is taken.

Richard Price and the existence of God

Richard Price discovered Bayes's essay and its now-famous theorem in Bayes's papers after Bayes's death. He believed that Bayes's Theorem helped prove the existence of God ("the Deity") and wrote the following in his introduction to the essay:

"The purpose I mean is, to show what reason we have for believing that there are in the constitution of things fixt laws according to which things happen, and that, therefore, the frame of the world must be the effect of the wisdom and power of an intelligent cause; and thus to confirm the argument taken from final causes for the existence of the Deity. It will be easy to see that the converse problem solved in this essay is more directly applicable to this purpose; for it shews us, with distinctness and precision, in every case of any particular order or recurrency of events, what reason there is to think that such recurrency or order is derived from stable causes or regulations in nature, and not from any irregularities of chance." (Philosophical Transactions of the Royal Society of London, 1763)

In modern terms this is an instance of the teleological argument.

Commentaries

  • G. A. Barnard (1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes's Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:293–295. (biographical remarks)
  • Stephen M. Stigler (1982). "Thomas Bayes's Bayesian Inference," Journal of the Royal Statistical Society, Series A, 145:250–258. (Stigler argues for a revised interpretation of the essay; recommended)
  • Isaac Todhunter (1865). A History of the Mathematical Theory of Probability from the time of Pascal to that of Laplace, Macmillan. Reprinted 1949, 1956 by Chelsea and 2001 by Thoemmes.

References

  1. Bayes, Mr; Price, Mr (1763). "An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F. R. S. Communicated by Mr. Price, in a Letter to John Canton, A. M. F. R. S" (PDF). Philosophical Transactions of the Royal Society of London. 53: 370–418. doi:10.1098/rstl.1763.0053. Archived from the original (PDF) on 2011-04-10. Retrieved 2011-09-25.
  2. Stigler, Stephen M. (2013). "The True Title of Bayes's Essay". Statistical Science. 28 (3): 283–288. arXiv:1310.0173. doi:10.1214/13-STS438.
