Probability

The theory of probabilities and the theory of errors now constitute a formidable body of great mathematical interest and of great practical importance.

R.S. Woodward

14.1 Probability - A Theoretical Approach

Let us consider the following situation:

Suppose a coin is tossed at random.

When we speak of a coin, we assume it to be ‘fair’, that is, symmetrical, so that there is no reason for it to come down more often on one side than the other. We describe this property by saying that the coin is ‘unbiased’. By the phrase ‘random toss’, we mean that the coin is allowed to fall freely without any bias or interference.

We know, in advance, that the coin can land in only one of two possible ways: either head up or tail up (we dismiss the possibility of its ‘landing’ on its edge, which may happen, for example, if it falls on sand). We can reasonably assume that each outcome, head or tail, is as likely to occur as the other. We refer to this by saying that the outcomes, head and tail, are equally likely.

For another example of equally likely outcomes, suppose we throw a die once. For us, a die will always mean a fair die. What are the possible outcomes? They are 1, 2, 3, 4, 5 and 6. Each number is as likely to show up as any other. So the equally likely outcomes of throwing a die are 1, 2, 3, 4, 5 and 6.

Are the outcomes of every experiment equally likely? Let us see.

Suppose that a bag contains 4 red balls and 1 blue ball, and you draw a ball without looking into the bag. What are the outcomes? Are the outcomes, a red ball and a blue ball, equally likely? Since there are 4 red balls and only one blue ball, you would agree that you are more likely to get a red ball than a blue ball. So, the outcomes (a red ball or a blue ball) are not equally likely. However, each individual ball in the bag is equally likely to be drawn. So, not all experiments have equally likely outcomes.

However, in this chapter, from now on, we will assume that all the experiments have equally likely outcomes.

In Class IX, we defined the experimental or empirical probability $\mathrm{P}(\mathrm{E})$ of an event $\mathrm{E}$ as

$$ \mathrm{P}(\mathrm{E})=\frac{\text { Number of trials in which the event happened }}{\text { Total number of trials }} $$
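For instance, with purely illustrative numbers (not taken from an actual experiment), if a coin were tossed 1000 times and a head turned up 455 times, then

$$ \mathrm{P}(\text {head})=\frac{455}{1000}=0.455 $$

Note that this empirical value depends on the particular set of trials, and may change if the experiment is repeated.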

The empirical interpretation of probability can be applied to every event associated with an experiment which can be repeated a large number of times. The requirement of repeating an experiment has some limitations, as it may be very expensive or unfeasible in many situations. Of course, it worked well in coin tossing or die throwing experiments. But how about repeating the experiment of launching a satellite in order to compute the empirical probability of its failure during launching, or the repetition of the phenomenon of an earthquake to compute the empirical probability of a multistoreyed building getting destroyed in an earthquake?

In experiments where we are prepared to make certain assumptions, the repetition of an experiment can be avoided, as the assumptions help in directly calculating the exact (theoretical) probability. The assumption of equally likely outcomes (which is valid in many experiments, as in the two examples above, of a coin and of a die) is one such assumption that leads us to the following definition of probability of an event.

The theoretical probability (also called classical probability) of an event E, written as $\mathrm{P}(\mathrm{E})$, is defined as

$$ P(E)=\frac{\text { Number of outcomes favourable to } E}{\text { Number of all possible outcomes of the experiment }} \text {, } $$

where we assume that the outcomes of the experiment are equally likely.
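For example, applying this definition to the two experiments described above: in a single toss of a fair coin there are 2 equally likely outcomes, of which exactly 1 is favourable to the event ‘getting a head’, and in a single throw of a die there are 6 equally likely outcomes, of which exactly 1 is favourable to the event ‘getting a 4’. So

$$ \mathrm{P}(\text {head})=\frac{1}{2}, \qquad \mathrm{P}(\text {getting a } 4)=\frac{1}{6} $$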

For brevity, we will refer to theoretical probability simply as probability.

This definition of probability was given by Pierre Simon Laplace in 1795.

Probability theory had its origin in the 16th century when an Italian physician and mathematician, J. Cardan, wrote the first book on the subject, The Book on Games of Chance. Since its inception, the study of probability has attracted the attention of great mathematicians. James Bernoulli (1654 - 1705), A. de Moivre (1667 - 1754), and Pierre Simon Laplace are among those who made significant contributions to this field. Laplace’s Théorie Analytique des Probabilités, published in 1812, is considered to be the greatest contribution by a single person to the theory of probability. In recent years, probability has been used extensively in many areas such as biology, economics, genetics, physics and sociology.

Pierre Simon Laplace (1749 - 1827)

Let us find the probability for some of the events associated with experiments where the equally likely assumption holds.

14.2 Summary

In this chapter, you have studied the following points:

1. The theoretical (classical) probability of an event $\mathrm{E}$, written as $\mathrm{P}(\mathrm{E})$, is defined as

$$ P(E)=\frac{\text { Number of outcomes favourable to } E}{\text { Number of all possible outcomes of the experiment }} $$

where we assume that the outcomes of the experiment are equally likely.

2. The probability of a sure event (or certain event) is 1.

3. The probability of an impossible event is 0.

4. The probability of an event $\mathrm{E}$ is a number $\mathrm{P}(\mathrm{E})$ such that

$$ 0 \leq \mathrm{P}(\mathrm{E}) \leq 1 $$

5. An event having only one outcome is called an elementary event. The sum of the probabilities of all the elementary events of an experiment is 1 .

6. For any event $E, P(E)+P(\bar{E})=1$, where $\bar{E}$ stands for ‘not $E$’. $E$ and $\bar{E}$ are called complementary events.
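As a quick check of points 5 and 6, consider the throw of a die from Section 14.1. Each of the six elementary events 1, 2, 3, 4, 5 and 6 has probability $\frac{1}{6}$, and

$$ P(1)+P(2)+\cdots+P(6)=6 \times \frac{1}{6}=1 $$

Similarly, if $E$ is the event ‘getting an even number’, then $\bar{E}$ is ‘getting an odd number’, and $P(E)+P(\bar{E})=\frac{3}{6}+\frac{3}{6}=1$.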

A NOTE TO THE READER

The experimental or empirical probability of an event is based on what has actually happened, while the theoretical probability of the event attempts to predict what will happen on the basis of certain assumptions. As the number of trials in an experiment goes on increasing, we may expect the experimental and theoretical probabilities to be nearly the same.
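The following short simulation is a minimal sketch of this idea for the coin-tossing experiment (it is not part of the text; the function name and trial counts are illustrative choices). It uses Python's standard random module: each run estimates the empirical probability of getting a head, and the estimates tend to settle near the theoretical value $\frac{1}{2}$ as the number of trials grows.

```python
import random

def empirical_probability_of_head(num_trials):
    """Toss a fair coin num_trials times and return the fraction of heads."""
    heads = sum(1 for _ in range(num_trials) if random.random() < 0.5)
    return heads / num_trials

# As the number of trials increases, the empirical probability printed
# below should come closer and closer to the theoretical value 0.5.
for n in (10, 100, 1000, 10000, 100000):
    print(n, empirical_probability_of_head(n))
```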


