Revision as of 19:16, 2 December 2018 by Adams391 (Talk | contribs)

Bernoulli Trials and Binomial Distribution

Thus far, we have observed how $ e $ relates to compound interest and its unique properties involving Euler's Formula and the imaginary number $ i $. In this section, we will take a look at a potentially mysterious instance of how $ e $ appears when working with probability and will attempt to discover some explanations of why this occurs at all. Before we do this, however, let us begin with a few definitions.


Bernoulli Trial
Earlier in the text, we briefly learned of Jacob Bernoulli and the connection between $ e $ and compound interest through the limit definition $ e=\lim_{n \to \infty}\left(1+\frac1n\right)^n $. We will now look at more of Bernoulli's work in the form of the Bernoulli Trial.

A Bernoulli Trial is a discrete experiment with two possible outcomes, described as "success" or "failure" of some event $ E $. In each trial, the event $ E $ has a constant probability $ p $ of occurring. Therefore, with one trial, the probability of the event occurring once is $ P(1)=p $, and the probability of the event occurring zero times is $ P(0)=1-p $.
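The definition above can be illustrated with a short simulation. The following sketch (Python, not part of the original article; the function name is our own) draws many Bernoulli trials with a fixed $ p $ and checks that the observed success rate is close to $ p $:

```python
import random

def bernoulli_trial(p):
    """Simulate one Bernoulli trial: 1 ("success") with probability p, else 0."""
    return 1 if random.random() < p else 0

# With p = 0.3, we expect P(1) = 0.3 and P(0) = 1 - 0.3 = 0.7.
random.seed(0)
trials = [bernoulli_trial(0.3) for _ in range(100000)]
print(sum(trials) / len(trials))  # close to 0.3
```

Each call is independent and uses the same $ p $, which is exactly the setup the binomial distribution below builds on.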


Binomial Distribution
A Binomial Distribution involves repeating a Bernoulli Trial some number of times, $ n $, each with the same probability $ p $. As such, the binomial distribution depends on both $ n $ and $ p $. An example distribution with varying values for $ n $ and $ p $ is shown in the image below[1].

[Image: Binomial Distribution.png — binomial distributions for several values of $ n $ and $ p $]

As seen in this image, the overall range of values depends solely on $ n $: with $ n $ trials, the event can occur anywhere from $ 0 $ to $ n $ times. The peak of the distribution occurs near $ np $, the mean of the distribution, while the spread depends on both parameters: the variance of a binomial distribution is $ np(1-p) $, so a larger number of trials produces a wider, less centralized distribution.

Finally, let us consider the probability that the event occurs some number of times, $ i $. We have already stated that, for each individual trial, the probability of the event occurring is $ p $. For multiple trials, however, the event must occur $ i $ times, each with probability $ p $, and the event must not occur the remaining $ n-i $ times, each with probability $ 1-p $. These $ i $ successes, however, can occur in $ {n \choose i} $ different ways. Therefore, the probability of the event occurring $ i $ times can be found using the following formula:


$ \begin{align} P(i)=p^i(1-p)^{n-i}{n \choose i}=p^i(1-p)^{n-i}\frac{n!}{i!(n-i)!} \end{align} $


where $ P(i) $ denotes the probability of the event occurring $ i $ times after $ n $ trials, each with probability $ p $.
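This formula translates directly into code. The following sketch (Python; the function name `binomial_pmf` is our own) computes $ P(i) $ and verifies that the probabilities over all possible values of $ i $ sum to $ 1 $:

```python
from math import comb

def binomial_pmf(i, n, p):
    """P(i): probability of exactly i successes in n trials, each with probability p."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# A single trial reduces to the Bernoulli case: P(1) = p, P(0) = 1 - p.
print(binomial_pmf(1, 1, 0.3))  # 0.3

# Summing P(i) over i = 0, ..., n gives 1, as a probability distribution must.
n, p = 10, 0.5
print(sum(binomial_pmf(i, n, p) for i in range(n + 1)))
```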


$ e $ in Binomial Distribution

So what does all of this have to do with $ e $? Well, in order to determine that, let us consider the following example.


Suppose you are playing a game in which you roll a six-sided die, and you "lose" if you ever roll a $ 1 $. If you roll the die six times, what is the probability that you will "win" this game by not rolling a single $ 1 $?
Well, for each roll, there is a $ \frac{1}{6} $ chance that you will roll a $ 1 $, so you have a $ \frac{5}{6} $ chance of not rolling a $ 1 $ for any individual roll. Therefore, the probability of not rolling a $ 1 $ on any of the six rolls is $ (\frac{5}{6})^6\approx0.33489797668 $.
Now, let us increase the number of sides from six to twenty. The probability of not rolling a $ 1 $ on a single roll of a twenty-sided die is $ \frac{19}{20} $, so the probability of never rolling a $ 1 $ after twenty rolls is $ \left(\frac{19}{20}\right)^{20}\approx0.3584859224 $.
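These computations are easy to check numerically. The sketch below (Python; the function name is our own) evaluates the winning probability $ \left(1-\frac1n\right)^n $ for growing $ n $, alongside $ \frac1e $, which these values are known to approach:

```python
from math import e

def win_probability(n):
    """Chance of never rolling a 1 in n rolls of a fair n-sided die: (1 - 1/n)^n."""
    return (1 - 1 / n) ** n

for n in [6, 20, 100, 1000]:
    print(n, win_probability(n))

# The limiting value 1/e, approximately 0.36788
print(1 / e)
```

Printing the values shows them rising from roughly $ 0.3349 $ at $ n=6 $ toward $ \frac1e $.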
Lucky for you, the odds have increased. Now, you may be wondering: how high can they get? If we keep increasing the number of sides and rolls to some value $ n $, the probability of winning is $ \left(1-\frac{1}{n}\right)^n $, which approaches $ \frac{1}{e}\approx0.3679 $ as $ n $ grows, by the same limit definition of $ e $ seen earlier.
