Chapter 14 Probability

Where a mathematical reasoning can be had, it is as great a folly to make use of any other, as to grope for a thing in the dark, when you have a candle in your hand. - JOHN ARBUTHNOT

14.1 Event

We have studied random experiments and the sample space associated with an experiment. The sample space serves as a universal set for all questions concerned with the experiment.

Consider the experiment of tossing a coin two times. An associated sample space is $S=\{HH, HT, TH, TT\}$.

Now suppose that we are interested in those outcomes which correspond to the occurrence of exactly one head. We find that $HT$ and $TH$ are the only elements of $S$ corresponding to the occurrence of this happening (event). These two elements form the set $E=\{HT, TH\}$

We know that the set $E$ is a subset of the sample space $S$. Similarly, we find the following correspondence between events and subsets of S.

$\begin{array}{ll} \text{Description of event} & \text{Corresponding subset of } S \\ \text{Number of tails is exactly 2} & A=\{TT\} \\ \text{Number of tails is at least one} & B=\{HT, TH, TT\} \\ \text{Number of heads is at most one} & C=\{HT, TH, TT\} \\ \text{Second toss is not head} & D=\{HT, TT\} \\ \text{Number of tails is at most two} & S=\{HH, HT, TH, TT\} \\ \text{Number of tails is more than two} & \phi \\ \end{array}$

The above discussion suggests that a subset of sample space is associated with an event and an event is associated with a subset of sample space. In the light of this we define an event as follows.

Definition Any subset $E$ of a sample space $S$ is called an event.
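This correspondence between events and subsets can also be checked directly by representing outcomes as strings and events as sets. The short Python sketch below is only an illustration; the names mirror the table above.

```python
# Sample space for tossing a coin two times; outcomes written as strings.
S = {"HH", "HT", "TH", "TT"}

# Events from the table above, expressed as subsets of S.
A = {"TT"}                      # number of tails is exactly 2
B = {"HT", "TH", "TT"}          # number of tails is at least one
C = {"HT", "TH", "TT"}          # number of heads is at most one
D = {"HT", "TT"}                # second toss is not head
sure = set(S)                   # number of tails is at most two (all of S)
impossible = set()              # number of tails is more than two (empty set)

# Every one of these events is a subset of the sample space.
for event in (A, B, C, D, sure, impossible):
    assert event <= S
```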

14.1.1 Occurrence of an event

Consider the experiment of throwing a die. Let $E$ denote the event "a number less than 4 appears". If '1' actually appears on the die, then we say that the event $E$ has occurred. In fact, if the outcome is 2 or 3, we also say that the event $E$ has occurred.

Thus, the event $E$ of a sample space $S$ is said to have occurred if the outcome $\omega$ of the experiment is such that $\omega \in E$. If the outcome $\omega$ is such that $\omega \notin E$, we say that the event $E$ has not occurred.

14.1.2 Types of events

Events can be classified into various types on the basis of the elements they have.

1. Impossible and Sure Events The empty set $\phi$ and the sample space $S$ also describe events. In fact, $\phi$ is called an impossible event and $S$, i.e., the whole sample space, is called the sure event.

To understand these let us consider the experiment of rolling a die. The associated sample space is

$ S=\{1,2,3,4,5,6\} $

Let $E$ be the event " the number appears on the die is a multiple of 7". Can you write the subset associated with the event $E$ ?

Clearly no outcome satisfies the condition given in the event, i.e., no element of the sample space ensures the occurrence of the event $E$. Thus, only the empty set corresponds to the event $E$. In other words, it is impossible to have a multiple of 7 on the upper face of the die. Thus, the event $E=\phi$ is an impossible event.

Now let us take up another event $F$ "the number that turns up is odd or even". Clearly $F=\{1,2,3,4,5,6\}=S$, i.e., all outcomes of the experiment ensure the occurrence of the event $F$. Thus, the event $F=S$ is a sure event.

2. Simple Event If an event $E$ has only one sample point of a sample space, it is called a simple (or elementary) event.

In a sample space containing $n$ distinct elements, there are exactly $n$ simple events.

For example in the experiment of tossing two coins, a sample space is

$ S=\{HH, HT, TH, TT\} $

There are four simple events corresponding to this sample space. These are

$ E_1=\{HH\}, E_2=\{HT\}, E_3=\{TH\} \text{ and } E_4=\{TT\} $

3. Compound Event If an event has more than one sample point, it is called a compound event.

For example, in the experiment of “tossing a coin thrice” the events

E: ‘Exactly one head appeared’

F: 'At least one head appeared'

G: 'At most one head appeared' etc.

are all compound events. The subsets of $S$ associated with these events are

$ \begin{aligned} & E=\{HTT, THT, TTH\} \\ & F=\{HTT, THT, TTH, HHT, HTH, THH, HHH\} \\ & G=\{TTT, THT, HTT, TTH\} \end{aligned} $

Each of the above subsets contains more than one sample point; hence they are all compound events.
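These compound events can also be generated mechanically by counting heads in each outcome; the following Python sketch is purely illustrative.

```python
from itertools import product

# Sample space for tossing a coin thrice.
S = {"".join(t) for t in product("HT", repeat=3)}

E = {w for w in S if w.count("H") == 1}   # exactly one head appeared
F = {w for w in S if w.count("H") >= 1}   # at least one head appeared
G = {w for w in S if w.count("H") <= 1}   # at most one head appeared

# Each event contains more than one sample point, so each is compound.
assert all(len(event) > 1 for event in (E, F, G))
```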

14.1.3 Algebra of events

In the Chapter on Sets, we have studied different ways of combining two or more sets, viz., union, intersection, difference, complement of a set, etc. Likewise, we can combine two or more events by using the analogous set notations.

Let A, B, C be events associated with an experiment whose sample space is S.

1. Complementary Event For every event A, there corresponds another event $A^{\prime}$ called the complementary event to $A$. It is also called the event ’not $A$ ‘.

For example, take the experiment of 'tossing three coins'. An associated sample space is

$ S=\{HHH, HHT, HTH, THH, HTT, THT, TTH, TTT\} $

Let $A=\{HTH, HHT, THH\}$ be the event ‘only one tail appears’

Clearly for the outcome HTT, the event A has not occurred. But we may say that the event ’not A’ has occurred. Thus, with every outcome which is not in A, we say that ’not A’ occurs.

Thus the complementary event ’not A’ to the event A is

$ A^{\prime}=\{HHH, HTT, THT, TTH, TTT\} $

or $ \quad \quad \quad \quad A^{\prime}=\{\omega: \omega \in S \text{ and } \omega \notin A\}=S-A . $
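In set terms the complementary event is just a set difference, which can be computed directly; a minimal sketch for the three-coin example above:

```python
from itertools import product

# Sample space for tossing three coins.
S = {"".join(t) for t in product("HT", repeat=3)}
A = {"HTH", "HHT", "THH"}     # 'only one tail appears'

not_A = S - A                 # the complementary event A' = S - A
print(sorted(not_A))          # ['HHH', 'HTT', 'THT', 'TTH', 'TTT']
```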

2. The Event ‘A or B’ Recall that union of two sets A and B denoted by A $\cup$ B contains all those elements which are either in A or in B or in both.

When the sets $A$ and $B$ are two events associated with a sample space, then ‘A $\cup B$ ’ is the event ’either $A$ or $B$ or both’. This event ‘A $\cup B$ ’ is also called ‘A or B’. Therefore

$ \begin{aligned} \text{Event '}A \text{ or } B\text{'} & =A \cup B \\ & =\{\omega: \omega \in A \text{ or } \omega \in B\} \end{aligned} $

3. The Event 'A and B' We know that the intersection of two sets, $A \cap B$, is the set of those elements which are common to both $A$ and $B$, i.e., which belong to both 'A and B'.

If $A$ and $B$ are two events, then the set $A \cap B$ denotes the event ’ $A$ and $B$ ‘.

Thus, $ \quad A \cap B=\{\omega: \omega \in A \text{ and } \omega \in B\} $

For example, in the experiment of 'throwing a die twice', let $A$ be the event 'score on the first throw is six' and $B$ be the event 'sum of the two scores is at least 11'. Then

$ A=\{(6,1),(6,2),(6,3),(6,4),(6,5),(6,6)\}, \text{ and } B=\{(5,6),(6,5),(6,6)\} $

so $\quad A \cap B=\{(6,5),(6,6)\}$

Note that the set $A \cap B=\{(6,5),(6,6)\}$ may represent the event 'the score on the first throw is six and the sum of the scores is at least 11'.

4. The Event ‘A but not B’ We know that A-B is the set of all those elements which are in A but not in B. Therefore, the set A-B may denote the event ‘A but not B’. We know that

$ A-B=A \cap B^{\prime} $
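The events 'A or B', 'A and B' and 'A but not B' of the two-dice example translate directly into set operations; the Python sketch below is illustrative only.

```python
from itertools import product

# Sample space for throwing a die twice: ordered pairs of scores.
S = set(product(range(1, 7), repeat=2))

A = {(x, y) for (x, y) in S if x == 6}        # score on the first throw is six
B = {(x, y) for (x, y) in S if x + y >= 11}   # sum of the two scores is at least 11

print(sorted(A | B))          # event 'A or B'  (union)
print(sorted(A & B))          # event 'A and B' -> [(6, 5), (6, 6)]
print(sorted(A - B))          # event 'A but not B'
assert A - B == A & (S - B)   # A - B equals the intersection of A with B'
```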

14.1.4 Mutually exclusive events

In the experiment of rolling a die, a sample space is $S=\{1,2,3,4,5,6\}$. Consider the events $A$ 'an odd number appears' and $B$ 'an even number appears'.

Clearly the event A excludes the event B and vice versa. In other words, there is no outcome which ensures the occurrence of events A and B simultaneously. Here

$A=\{1,3,5\}$ and $B=\{2,4,6\}$

Clearly $A \cap B=\phi$, i.e., $A$ and $B$ are disjoint sets.

In general, two events $A$ and $B$ are called mutually exclusive events if the occurrence of any one of them excludes the occurrence of the other event, i.e., if they cannot occur simultaneously. In this case the sets $A$ and $B$ are disjoint.

Again in the experiment of rolling a die, consider the events $A$ 'an odd number appears' and $B$ 'a number less than 4 appears'.

Obviously $A=\{1,3,5\}$ and $B=\{1,2,3\}$

Now $3 \in A$ as well as $3 \in B$

Therefore, A and B are not mutually exclusive events.
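Mutual exclusivity amounts to checking that the intersection is empty; a small sketch covering both die-rolling examples above:

```python
S = {1, 2, 3, 4, 5, 6}

# A: 'an odd number appears', B: 'an even number appears'
A, B = {1, 3, 5}, {2, 4, 6}
print(A & B == set())        # True: A and B are mutually exclusive

# A: 'an odd number appears', B: 'a number less than 4 appears'
A, B = {1, 3, 5}, {1, 2, 3}
print(A & B == set())        # False: 3 is common, so not mutually exclusive
```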

Remark Simple events of a sample space are always mutually exclusive.

14.1.5 Exhaustive events

Consider the experiment of throwing a die. We have $S=\{1,2,3,4,5,6\}$. Let us define the following events

A: ‘a number less than 4 appears’,

B: ‘a number greater than 2 but less than 5 appears’

and C: ‘a number greater than 4 appears’.

Then $A=\{1,2,3\}, B=\{3,4\}$ and $C=\{5,6\}$. We observe that

$ A \cup B \cup C=\{1,2,3\} \cup\{3,4\} \cup\{5,6\}=S . $

Such events $A, B$ and $C$ are called exhaustive events. In general, if $E_1, E_2, \ldots, E_n$ are $n$ events of a sample space $S$ and if

$ E_1 \cup E_2 \cup E_3 \cup \ldots \cup E_n=\bigcup\limits_{i=1}^{n} E_i=S $

then $E_1, E_2, \ldots, E_n$ are called exhaustive events. In other words, events $E_1, E_2, \ldots, E_n$ are said to be exhaustive if at least one of them necessarily occurs whenever the experiment is performed.

Further, if $E_i \cap E_j=\phi$ for $i \neq j$ i.e., events $E_i$ and $E_j$ are pairwise disjoint and $\bigcup\limits_{i=1}^{n} E_i=S$, then events $E_1, E_2, \ldots, E_n$ are called mutually exclusive and exhaustive events.
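Both conditions can be verified mechanically: the union of the events must equal the sample space, and every pair of events must be disjoint. A brief sketch for the die-throwing example above:

```python
from itertools import combinations

S = {1, 2, 3, 4, 5, 6}
events = [{1, 2, 3}, {3, 4}, {5, 6}]     # A, B and C from the example

exhaustive = set().union(*events) == S
pairwise_disjoint = all(E & F == set() for E, F in combinations(events, 2))

print(exhaustive)          # True: A, B, C are exhaustive
print(pairwise_disjoint)   # False: A and B share the outcome 3
```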

We now consider some examples.

14.2 Axiomatic Approach to Probability

In earlier sections, we have considered random experiments, sample spaces and events associated with these experiments. In our day to day life we use many words about the chances of occurrence of events. Probability theory attempts to quantify these chances of occurrence or non-occurrence of events.

In earlier classes, we have studied some methods of assigning probability to an event associated with an experiment when the total number of outcomes is known.

The axiomatic approach is another way of describing the probability of an event. In this approach some axioms or rules are laid down for assigning probabilities.

Let $S$ be the sample space of a random experiment. The probability $P$ is a real valued function whose domain is the power set of $S$ and range is the interval $[0,1]$ satisfying the following axioms

$\begin{matrix} \text{ (i) For any event } E, P(E) \geq 0 & \text{ (ii) } P(S)=1\end{matrix} $

(iii) If $E$ and $F$ are mutually exclusive events, then $P(E \cup F)=P(E)+P(F)$.

It follows from (iii) that $P(\phi)=0$. To prove this, we take $F=\phi$ and note that $E$ and $\phi$ are disjoint events. Therefore, from axiom (iii), we get

$ P(E \cup \phi)=P(E)+P(\phi) \text{ or } \quad P(E)=P(E)+P(\phi) \text{ i.e. } P(\phi)=0 \text{. } $

Let $S$ be a sample space containing outcomes $\omega_1, \omega_2, \ldots, \omega_n$, i.e.,

$ S=\{\omega_1, \omega_2, \ldots, \omega_n\} $

It follows from the axiomatic definition of probability that

(i) $0 \leq P(\omega_i) \leq 1$ for each $\omega_i \in S$

(ii) $P(\omega_1)+P(\omega_2)+\ldots+P(\omega_n)=1$

(iii) For any event $A, P(A)=\sum P(\omega_i), \omega_i \in A$.

Note - It may be noted that the singleton $\{\omega_i\}$ is called an elementary event and, for notational convenience, we write $P(\omega_i)$ for $P(\{\omega_i\})$.

For example, in ‘a coin tossing’ experiment we can assign the number $\frac{1}{2}$ to each of the outcomes $H$ and $T$.

i.e. $ \quad \quad \quad \quad P(H)=\frac{1}{2} \text{ and } P(T)=\frac{1}{2} \quad \quad \quad \quad \ldots (1)$

Clearly this assignment satisfies both conditions, i.e., each number is neither less than zero nor greater than 1, and

$ P(H)+P(T)=\frac{1}{2}+\frac{1}{2}=1 $

Therefore, in this case we can say that probability of $H=\frac{1}{2}$, and probability of $T=\frac{1}{2}$

If we take $P(H)=\frac{1}{4}$ and $P(T)=\frac{3}{4}\quad \quad \quad \quad \ldots (2)$

Does this assignment satisfy the conditions of axiomatic approach?

Yes, in this case, probability of $H=\frac{1}{4}$ and probability of $T=\frac{3}{4}$.

We find that both the assignments (1) and (2) are valid for probability of $H$ and $T$.

In fact, we can assign the numbers $p$ and $(1-p)$ to the outcomes $H$ and $T$ respectively, where $0 \leq p \leq 1$, so that $P(H)+P(T)=p+(1-p)=1$

This assignment, too, satisfies both conditions of the axiomatic approach of probability. Hence, we can say that there are many (in fact, infinitely many) ways to assign probabilities to the outcomes of an experiment (one such validity check is sketched below). We now consider some examples.
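Whether a particular assignment, such as (1) or (2), is admissible can be checked numerically: each probability must lie in $[0,1]$ and the probabilities must sum to 1. The helper `is_valid` below is a name chosen only for this sketch; exact fractions avoid rounding issues.

```python
from fractions import Fraction as Fr

def is_valid(prob):
    """Check that every P(w) lies in [0, 1] and that the values sum to 1."""
    return all(0 <= p <= 1 for p in prob.values()) and sum(prob.values()) == 1

print(is_valid({"H": Fr(1, 2), "T": Fr(1, 2)}))   # assignment (1): True
print(is_valid({"H": Fr(1, 4), "T": Fr(3, 4)}))   # assignment (2): True
print(is_valid({"H": Fr(1, 4), "T": Fr(1, 4)}))   # sums to 1/2, so False
```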

14.2.1 Probability of an event

Let $S$ be a sample space associated with the experiment 'examining three consecutive pens produced by a machine and classifying each as good (non-defective) or bad (defective)'. We may get $0,1,2$ or 3 defective pens as a result of this examination.

A sample space associated with this experiment is

$ S=\{BBB, BBG, BGB, GBB, BGG, GBG, GGB, GGG\} $

where $B$ stands for a defective or bad pen and $G$ for a non - defective or good pen.

Let the probabilities assigned to the outcomes be as follows

$\begin{array}{lllllllll} \text{Sample point:} & BBB & BBG & BGB & GBB & BGG & GBG & GGB & GGG \\ \\ \text{Probability: } & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} & \frac{1}{8} \end{array} $

Let event A: 'there is exactly one defective pen' and event B: 'there are at least two defective pens'.

Hence $A=\{BGG, GBG, GGB\}$ and $B=\{BBG, BGB, GBB, BBB\}$

Now $\quad P(A)=\sum P(\omega_i), \forall \omega_i \in A$

$ =P(BGG)+P(GBG)+P(GGB)=\frac{1}{8}+\frac{1}{8}+\frac{1}{8}=\frac{3}{8} $

and

$ \begin{aligned} P(B) & =\sum P(\omega_i), \forall \omega_i \in B \\ & =P(BBG)+P(BGB)+P(GBB)+P(BBB)=\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}=\frac{4}{8}=\frac{1}{2} \end{aligned} $
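These sums can be reproduced mechanically by adding the assigned probabilities over the outcomes of each event; a brief sketch using exact fractions:

```python
from fractions import Fraction

S = ["BBB", "BBG", "BGB", "GBB", "BGG", "GBG", "GGB", "GGG"]
prob = {w: Fraction(1, 8) for w in S}        # each outcome assigned probability 1/8

A = {w for w in S if w.count("B") == 1}      # exactly one defective pen
B = {w for w in S if w.count("B") >= 2}      # at least two defective pens

print(sum(prob[w] for w in A))               # 3/8
print(sum(prob[w] for w in B))               # 1/2
```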

Let us consider another experiment of 'tossing a coin twice'.

The sample space of this experiment is $S=\{HH, HT, TH, TT\}$

Let the following probabilities be assigned to the outcomes

$ P(HH)=\frac{1}{4}, P(HT)=\frac{1}{7}, P(TH)=\frac{2}{7}, P(TT)=\frac{9}{28} $

Clearly this assignment satisfies the conditions of axiomatic approach. Now, let us find the probability of the event $E$ : ‘Both the tosses yield the same result’.

Here $\quad \quad \quad \quad E=\{HH, TT\}$

Now $\quad \quad \quad \quad P(E)=\sum P(\omega_i)$, for all $\omega_i \in E$

$ =P(HH)+P(TT)=\frac{1}{4}+\frac{9}{28}=\frac{4}{7} $

For the event $F$ : ’exactly two heads’, we have $F=\{HH\}$

and $ \quad \quad \quad \quad P(F)=P(HH)=\frac{1}{4} $
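Again the event probabilities follow by summing the assigned values; a quick check with exact fractions confirms $P(E)=\frac{4}{7}$ and $P(F)=\frac{1}{4}$.

```python
from fractions import Fraction

prob = {"HH": Fraction(1, 4), "HT": Fraction(1, 7),
        "TH": Fraction(2, 7), "TT": Fraction(9, 28)}
assert sum(prob.values()) == 1               # the assignment is admissible

E = {"HH", "TT"}                             # both tosses yield the same result
F = {"HH"}                                   # exactly two heads

print(sum(prob[w] for w in E))               # 4/7
print(sum(prob[w] for w in F))               # 1/4
```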

14.2.2 Probabilities of equally likely outcomes

Let a sample space of an experiment be

$S=\{\omega_1, \omega_2, \ldots, \omega_n\} . $

Let all the outcomes be equally likely to occur, i.e., the chance of occurrence of each simple event must be the same.

i.e. $ \quad \quad \quad \quad P(\omega_i)=p, \text{ for all } \omega_i \in S \text{ where } 0 \leq p \leq 1 $

$ \begin{aligned} \text{Since } \quad \quad \quad \quad& \sum _{i=1}^{n} P(\omega_i)=1 \text{ i.e., } p+p+\ldots+p(n \text{ times })=1 \\ \text{or}\quad \quad \quad \quad & n p=1 \text{ i.e., } p=\frac{1}{n} \end{aligned} $

Let $S$ be a sample space and $E$ be an event, such that $n(S)=n$ and $n(E)=m$. If each outcome is equally likely, then it follows that

$ P(E)=\frac{m}{n} \quad=\frac{\text{ Number of outcomes favourable to } E}{\text{ Total possible outcomes }} $
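For equally likely outcomes the computation therefore reduces to counting; a minimal sketch for the die-throwing event 'a number less than 4 appears' used earlier in the chapter:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}          # throwing a die: all outcomes equally likely
E = {w for w in S if w < 4}     # 'a number less than 4 appears'

P_E = Fraction(len(E), len(S))  # favourable outcomes / total outcomes
print(P_E)                      # 1/2
```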

14.2.3 Probability of the event '$A$ or $B$'

Let us now find the probability of event ‘A or B’, i.e., $P(A \cup B)$

Let $A=\{HHT, HTH, THH\}$ and $B=\{HTH, THH, HHH\}$ be two events associated with ’tossing of a coin thrice’

Clearly $A \cup B=\{HHT, HTH, THH, HHH\}$

Now $ \quad \quad \quad \quad P(A \cup B)=P(HHT)+P(HTH)+P(THH)+P(HHH) $

If all the outcomes are equally likely, then

$ P(A \cup B)=\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}=\frac{4}{8}=\frac{1}{2} $

Also $ \quad \quad \quad \quad P(A)=P(HHT)+P(HTH)+P(THH)=\frac{3}{8} $

and $ \quad \quad \quad \quad P(B)=P(HTH)+P(THH)+P(HHH)=\frac{3}{8} $

Therefore $ \quad \quad \quad \quad P(A)+P(B)=\frac{3}{8}+\frac{3}{8}=\frac{6}{8} $

It is clear that $ \quad \quad P(A \cup B) \neq P(A)+P(B)$

The points HTH and THH are common to both A and B. In the computation of $P(A)+P(B)$ the probabilities of points HTH and THH, i.e., the elements of $A \cap B$ are included twice. Thus to get the probability $P(A \cup B)$ we have to subtract the probabilities of the sample points in $A \cap B$ from $P(A)+P(B)$

i.e.

$ \begin{aligned} P(A \cup B) & =P(A)+P(B)-\sum P(\omega_i), \forall \omega_i \in A \cap B \\ & =P(A)+P(B)-P(A \cap B) \end{aligned} $

Thus we observe that, $P(A \cup B)=P(A)+P(B)-P(A \cap B)$
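Before proving this in general, the relation can be verified numerically for the three-coin example above; the sketch assumes all eight outcomes are equally likely.

```python
from fractions import Fraction
from itertools import product

S = {"".join(t) for t in product("HT", repeat=3)}
prob = {w: Fraction(1, 8) for w in S}      # equally likely outcomes

A = {"HHT", "HTH", "THH"}
B = {"HTH", "THH", "HHH"}

def P(event):
    return sum(prob[w] for w in event)

# 1/2 == 3/8 + 3/8 - 2/8
assert P(A | B) == P(A) + P(B) - P(A & B)
```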

In general, if $A$ and $B$ are any two events associated with a random experiment, then by the definition of probability of an event, we have

$ P(A \cup B)=\sum p(\omega_i), \forall \omega_i \in A \cup B $

Since $\quad \quad \quad A\cup B = (A-B) \cup (A \cap B)\cup (B-A)$

we have

$P(A \cup B)=[\sum P(\omega_i) \forall \omega_i \in(A-B)]+[\sum P(\omega_i) \forall \omega_i \in A \cap B]+[\sum P(\omega_i) \forall \omega_i \in B-A] \quad \quad \ldots (1)$

(because $A-B, A \cap B$ and $B-A$ are mutually exclusive)

Also $P(A)+P(B)=\left[\sum p(\omega_i) \forall \omega_i \in A\right]+\left[\sum p(\omega_i) \forall \omega_i \in B\right]$

$ \begin{aligned} = & {\left[\sum P(\omega_i) \forall \omega_i \in(A-B) \cup(A \cap B)\right]+\left[\sum P(\omega_i) \forall \omega_i \in(B-A) \cup(A \cap B)\right] } \\ = & {\left[\sum P(\omega_i) \forall \omega_i \in(A-B)\right]+\left[\sum P(\omega_i) \forall \omega_i \in(A \cap B)\right]+\left[\sum P(\omega_i) \forall \omega_i \in(B-A)\right]+} \\ & {\left[\sum P(\omega_i) \forall \omega_i \in(A \cap B)\right] } \\ = & P(A \cup B)+\left[\sum P(\omega_i) \forall \omega_i \in A \cap B\right] \quad [\text{using } (1)] \\ = & P(A \cup B)+P(A \cap B) . \end{aligned} $

Hence $P(A \cup B)=P(A)+P(B)-P(A \cap B)$.

Alternatively, it can also be proved as follows:

$A \cup B=A \cup(B-A)$, where $A$ and $B-A$ are mutually exclusive, and $B=(A \cap B) \cup(B-A)$, where $A \cap B$ and $B-A$ are mutually exclusive. Using Axiom (iii) of probability, we get

$ \quad \quad \quad \quad P(A \cup B)=P(A)+P(B-A) \quad \quad \quad \quad \ldots(2) $

and $ \quad \quad \quad \quad P(B)=P(A \cap B)+P(B-A) \quad \quad \quad \quad \ldots(3) $

Subtracting (3) from (2) gives

$ P(A \cup B) - P(B) = P(A) - P(A \cap B) $

$ P(A \cup B) = P(A) + P(B) - P(A \cap B) $

The above result can further be verified by observing the Venn Diagram (Fig 14.1)

[Fig 14.1: Venn diagram of events $A$ and $B$, illustrating $P(A \cup B)=P(A)+P(B)-P(A \cap B)$]

If $A$ and $B$ are disjoint sets, i.e., they are mutually exclusive events, then $A \cap B=\phi$. Therefore

$ P(A \cap B)=P(\phi)=0 $

Thus, for mutually exclusive events $A$ and $B$, we have

$ P(A \cup B)=P(A)+P(B) $

which is Axiom (iii) of probability.

14.2.4 Probability of event ’not $A$’

Consider the event $A=\{2,4,6,8\}$ associated with the experiment of drawing a card from a deck of ten cards numbered from 1 to 10 . Clearly the sample space is $S=\{1,2,3, \ldots, 10\}$

If all the outcomes $1,2, \ldots, 10$ are considered to be equally likely, then the probability of each outcome is $\frac{1}{10}$

Now

$ \begin{aligned} P(A) & =P(2)+P(4)+P(6)+P(8) \\ & =\frac{1}{10}+\frac{1}{10}+\frac{1}{10}+\frac{1}{10}=\frac{4}{10}=\frac{2}{5} \end{aligned} $

Also, the event 'not $A$' is $A^{\prime}=\{1,3,5,7,9,10\}$

Now $ \quad\quad\quad\quad P(A^{\prime})=P(1)+P(3)+P(5)+P(7)+P(9)+P(10) $

$ =\frac{6}{10}=\frac{3}{5} $

Thus, $ \quad\quad\quad\quad P(A^{\prime})=\frac{3}{5}=1-\frac{2}{5}=1-P(A) $

Also, we know that $A^{\prime}$ and $A$ are mutually exclusive and exhaustive events i.e.,

$ A \cap A^{\prime}=\phi \text{ and } A \cup A^{\prime}=S $

or $\quad P(A \cup A^{\prime})=P(S)$

Now $\quad P(A)+P(A^{\prime})=1, \quad$ by using axioms (ii) and (iii).

or $\quad P(A^{\prime})=P(\text{not } A)=1-P(A)$
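A quick numerical check of this rule for the ten-card example (assuming equally likely outcomes, as above):

```python
from fractions import Fraction

S = set(range(1, 11))                    # cards numbered 1 to 10, equally likely
A = {2, 4, 6, 8}

P_A = Fraction(len(A), len(S))           # 2/5
P_not_A = Fraction(len(S - A), len(S))   # 3/5

assert P_not_A == 1 - P_A
```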

We now consider some examples and exercises having equally likely outcomes unless stated otherwise.

Summary

In this Chapter, we studied about the axiomatic approach of probability. The main features of this Chapter are as follows:

Event: A subset of the sample space

Impossible event : The empty set

Sure event: The whole sample space

Complementary event or event 'not $A$': The set $A^{\prime}$ or $S-A$

Event A or B: The set A $\cup$ B

Event $A$ and $B$ : The set $A \cap B$

Event $A$ and not $B$ : The set $A-B$

Mutually exclusive events: $A$ and $B$ are mutually exclusive if $A \cap B=\phi$

Exhaustive and mutually exclusive events: Events $E_1, E_2, \ldots, E_n$ are mutually exclusive and exhaustive if $E_1 \cup E_2 \cup \ldots \cup E_n=S$ and $E_i \cap E_j=\phi$ for all $i \neq j$

Probability: A number $P(\omega_i)$ associated with each sample point $\omega_i$ such that

(i) $0 \leq P(\omega_i) \leq 1$

(ii) $\sum P(\omega_i)=1$, where the sum is taken over all $\omega_i \in S$

(iii) $P(A)=\sum P(\omega_i)$, where the sum is taken over all $\omega_i \in A$. The number $P(\omega_i)$ is called the probability of the outcome $\omega_i$

Equally likely outcomes: All outcomes with equal probability

Probability of an event: For a finite sample space with equally likely outcomes, the probability of an event is $P(A)=\frac{n(A)}{n(S)}$, where $n(A)=$ number of elements in the set $A$ and $n(S)=$ number of elements in the set $S$.

If $A$ and $B$ are any two events, then

$ \begin{aligned} & P(A \text{ or } B)=P(A)+P(B)-P(A \text{ and } B) \\ & \text{ equivalently, } P(A \cup B)=P(A)+P(B)-P(A \cap B) \end{aligned} $

If $A$ and $B$ are mutually exclusive, then $P(A$ or $B)=P(A)+P(B)$

If $A$ is any event, then

$ P(\text{not } A)=1-P(A) $

Historical Note

Probability theory, like many other branches of mathematics, evolved out of practical considerations. It had its origin in the 16th century when the Italian physician and mathematician Jerome Cardan (1501-1576) wrote the first book on the subject, "Book on Games of Chance" (Liber de Ludo Aleae). It was published in 1663, after his death.

In 1654, a gambler, Chevalier de Méré, approached the well-known French philosopher and mathematician Blaise Pascal (1623-1662) with certain dice problems. Pascal became interested in these problems and discussed them with the famous French mathematician Pierre de Fermat (1601-1665). Both Pascal and Fermat solved the problems independently. Besides Pascal and Fermat, outstanding contributions to probability theory were also made by Christiaan Huygens (1629-1695), a Dutchman, J. Bernoulli (1654-1705), De Moivre (1667-1754), the Frenchman Pierre Laplace (1749-1827), the Russian P.L. Chebyshev (1821-1897), A.A. Markov (1856-1922) and A.N. Kolmogorov (1903-1987). Kolmogorov is credited with the axiomatic theory of probability. His book 'Foundations of Probability', published in 1933, introduces probability as a set function and is considered a classic.


