Intro
Here’s some important information:
- The probability my sister will call me today is @@@1/5@@@.
- The probability my neighbor will come and ask for sugar today is @@@1/7@@@.
Assuming my neighbor asked for sugar today, what is the probability that my sister will call today? Assuming my sister called me, what is the probability my neighbor will come and ask for sugar? Before you answer, I can assure you that this is not a trick question: my sister is not my neighbor, nor has she conspired with the neighbor. She doesn’t even know the neighbor.
In both cases, then, the additional information provided is irrelevant in the sense that the probabilities remain the same, @@@1/5@@@ and @@@1/7@@@ respectively. This is a demonstration of the notion of independence.
In each of the cases above we asked for conditional probabilities, e.g. the probability of my sister calling conditioned on my neighbor asking for sugar, and “irrelevant” meant that conditioning one on the other does not change the probability. That’s what the notion of independence in probability theory means. Sounds too simple, right? This is perhaps the most important notion in probability because it appears naturally, allows for calculations, and leads to some very deep insights on randomness we will explore when discussing limit theorems.
Here’s a simulation illustrating two sequences of @@@150@@@ tosses each. All tosses are “fair”, but only one of the sequences consists of independent tosses (of course we explain this precisely later). Can you guess which one?
- When tosses are independent, anything can happen and everything is very erratic. Or:
These are the days when anything goes.
Everyday is a winding road.
- In the other sequence we used a fair, but lazy coin. It has no preference for Heads or for Tails, but if it lands on one of the sides, it tends more to stay there a little while and appears more “regular” (since we’re talking about lazy, think of tanning on the beach: you start equally likely on belly or back, stay for a little while, swap, and repeat).
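The two kinds of sequences can be generated with a short simulation. This is a sketch: the “stay” probability @@@0.8@@@ for the lazy coin is an arbitrary choice, and `num_runs` is a hypothetical helper counting maximal blocks of identical outcomes, which is one way to tell the sequences apart.

```python
import random

random.seed(0)

def independent_tosses(n):
    # Each toss is a fresh fair coin flip, unaffected by the past.
    return [random.choice("HT") for _ in range(n)]

def lazy_tosses(n, stay=0.8):
    # Fair but "lazy": start uniformly on H or T, then repeat the
    # previous side with probability `stay` (0.8 is an assumed value).
    seq = [random.choice("HT")]
    for _ in range(n - 1):
        if random.random() < stay:
            seq.append(seq[-1])                      # stay on the same side
        else:
            seq.append("H" if seq[-1] == "T" else "T")  # swap sides
    return seq

def num_runs(seq):
    # Number of maximal blocks of identical symbols; the lazy coin
    # produces fewer, longer runs and so looks more "regular".
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

ind = independent_tosses(150)
lazy = lazy_tosses(150)
```

Both sequences are “fair” in the long run; only the run structure gives the lazy one away.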
Definition and Applications
Consider an event @@@B@@@ with @@@P(B)>0@@@. We would like to declare that @@@A@@@ is independent of @@@B@@@ if @@@P(A|B)=P(A)@@@. That is, “knowing” or “assuming” that @@@B@@@ has occurred does not affect the probability of @@@A@@@. There is one little problem in making this our official definition: the asymmetry between @@@A@@@ and @@@B@@@. Note that if @@@P(A)=0@@@, then clearly @@@P(A|B)=0=P(A)@@@, but if @@@P(B)=0@@@ we cannot even define @@@P(A|B)@@@. What do we do? Observe that @@@P(A|B)=P(A\cap B)/P(B)@@@, which is equal to @@@P(A)@@@ exactly when @@@P(A\cap B)=P(A) P(B)@@@. The latter is a nicer expression for two reasons: there is no denominator, and it is symmetric in @@@A@@@ and @@@B@@@ (both play the same role). We therefore declare this as our definition:
Events @@@A@@@ and @@@B@@@ are independent with respect to the probability measure @@@P@@@ if @@@P(A \cap B)=P(A)P(B)@@@
I’d like to stress that independence is always tied to the probability measure. Two events may be independent under one choice of probability measure yet not independent under another. Here is an important example.
Let @@@\Omega=\{bb,bw,wb,ww\}@@@, all ordered pairs of length @@@2@@@ formed by the letters @@@b@@@ (black ball) and @@@w@@@ (white ball), AKA @@@\{b,w\}^2@@@. We will construct two different probability measures on @@@\Omega@@@. Let @@@A=\{\mbox{first is black}\}=\{bb,bw\}@@@ and @@@B=\{\mbox{second is black}\}= \{bb,wb\}@@@. We will construct two probability measures on @@@\Omega@@@ so that @@@A@@@ and @@@B@@@ will be independent with respect to one but not independent with respect to the other.
We do this as follows. Put two identical white balls and two identical black balls in an opaque jar. Consider the following two sampling mechanisms:
- Pick one ball, record its color, return the ball, pick again, and record the color.
- Pick one ball, record its color, pick another ball, and record its color.
The two mechanisms lead to two different probability measures on @@@\Omega@@@. Let @@@P_1@@@ and @@@P_2@@@ denote the probability measures on @@@\Omega@@@ induced by the first and second sampling mechanisms, respectively.
To understand what @@@P_1@@@ and @@@P_2@@@ are, think first of the balls as distinct objects labeled @@@b_1,b_2,w_1,w_2@@@ and let @@@\Omega'@@@ be the set of balls @@@\{b_1,b_2,w_1,w_2\}@@@. In the first mechanism we randomly (=uniformly) sample from @@@\Omega'\times \Omega'@@@, then omit the numerical subscripts to obtain our outcome in @@@\Omega@@@. With this, @@@P_1@@@ is simply the uniform measure on @@@\Omega@@@. Thus @@@P_1(A)= P_1(B)= \frac 12@@@ and @@@P_1(A \cap B) = 1/4 = P_1(A) P_1(B)@@@, so @@@A@@@ and @@@B@@@ are independent with respect to @@@P_1@@@.
In the second mechanism, we randomly (=uniformly) sample from all permutations of two elements in @@@\Omega'@@@ (ordered lists of two distinct elements: we do not allow @@@b_1b_1@@@), then omit the subscripts. There are @@@4*3@@@ distinct permutations, of which exactly @@@2@@@ have two black balls (@@@b_1b_2@@@ and @@@b_2b_1@@@) and @@@2*2@@@ have the first black and the second white, so @@@P_2 (A) = 6/12=1/2@@@. Similarly @@@P_2(B) = 1/2@@@. Since @@@A\cap B@@@ is the event of two black balls, we have @@@P_2(A \cap B) = 2/12=1/6 < 1/4 = P_2(A) P_2(B)@@@; therefore under @@@P_2@@@ the events @@@A@@@ and @@@B@@@ are not independent.
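Both sampling mechanisms are small enough to enumerate exactly, which makes for a quick check of the claims above. A sketch, using labeled balls exactly as in the text:

```python
from fractions import Fraction
from itertools import product, permutations

balls = ["b1", "b2", "w1", "w2"]  # labels make the balls distinct

# Mechanism 1: with replacement -> uniform on balls x balls (16 outcomes).
with_repl = list(product(balls, repeat=2))
# Mechanism 2: without replacement -> uniform on ordered pairs of
# distinct balls (12 outcomes).
without_repl = list(permutations(balls, 2))

def prob(outcomes, event):
    # Uniform measure: |event| / |outcomes|, as an exact fraction.
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0].startswith("b")   # first ball is black
B = lambda o: o[1].startswith("b")   # second ball is black
AB = lambda o: A(o) and B(o)

p1_ab = prob(with_repl, AB)      # 1/4 = P1(A) P1(B): independent
p2_ab = prob(without_repl, AB)   # 1/6 < 1/4: not independent
```

Exact fractions avoid any floating-point fuzz in the comparison.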
Let’s test the definition of independence.
Show that if @@@P(A)\in \{0,1\}@@@, then @@@A@@@ is independent of any event.
We’re in dire need of an example.
This is known as the Gambler’s Fallacy or sometimes as the Monte-Carlo Fallacy. We are tossing a fair coin 10 times.
- What is the probability that all will land Heads?
- Suppose the first @@@9@@@ tosses were all Heads. What is the probability that the last will be Heads?
Our sample space here is the set of all sequences of length @@@10@@@ consisting of H and T, and the probability measure on it is uniform: each outcome is equally likely, and therefore has probability @@@1/2^{10} = 1/1024@@@. The event in part 1. consists of exactly one sequence and therefore has probability @@@1/1024@@@. Pretty rare. As for 2., let @@@A@@@ be the event “the 10th toss is Heads” and @@@B@@@ the event “the first 9 tosses were Heads”. Then @@@A\cap B@@@ is the event of part 1., while @@@B@@@ consists of exactly two sequences. Therefore @@@P(A|B)=(1/1024)/(2/1024)=1/2=P(A)@@@. In other words, @@@A@@@ is independent of @@@B@@@.
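The conditional probability above can be verified by brute-force enumeration of all @@@1024@@@ sequences. A sketch:

```python
from fractions import Fraction
from itertools import product

# All 2^10 equally likely sequences of ten fair-coin tosses.
outcomes = list(product("HT", repeat=10))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[9] == "H"                    # 10th toss is Heads
B = lambda o: all(x == "H" for x in o[:9])   # first 9 tosses are Heads

p_A = prob(A)                                           # 1/2
p_all_heads = prob(lambda o: A(o) and B(o))             # 1/1024
p_A_given_B = p_all_heads / prob(B)                     # also 1/2
```

The drought of Tails is baked into @@@B@@@, and conditioning on it changes nothing.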
Some would say that the longer the drought of Tails, the more likely the next toss will be a Tail. This is wrong, at least in our context. Under our assumption of the coin being fair, a long drought of Tails is very unlikely, yet the fact that it has already happened is irrelevant for future probabilities: it doesn’t magically affect the fairness of the coin.
Gambler’s fallacy is a serious problem, much beyond plain gambling. https://www.youtube.com/watch?v=4eVluL-idkM
Let @@@P@@@ be the uniform probability on @@@\{1,\dots,6\}@@@. Find two events which are not independent.
Now that we understand a little, let’s develop some theory. Let’s record an important yet simple result.
Suppose @@@A,B@@@ are independent. Then @@@A^c@@@ and @@@B@@@ are independent.
Since @@@A@@@ and @@@B@@@ are independent, @@@P(A^c \cap B)= P(B)-P(A\cap B)=P(B)-P(A)P(B)@@@. Therefore
$$ P(B)(1-P(A))=P(A^c\cap B).$$But the LHS is @@@P(B)P(A^c)@@@, which completes the proof. ■

Note that repeated application of the proposition shows that if @@@A@@@ and @@@B@@@ are independent, then so are @@@A^c@@@ and @@@B@@@, @@@A@@@ and @@@B^c@@@, and @@@A^c@@@ and @@@B^c@@@.
It’s not always immediately clear that two events are independent.
Aron, Baron and Caron are each tossing a fair coin. Let @@@A@@@ be the event that Aron’s and Baron’s tosses are different, @@@B@@@ the event that Baron’s and Caron’s tosses are different, and @@@C@@@ the event that Caron’s and Aron’s tosses are different. Each of the pairs @@@A,B@@@, @@@B,C@@@ and @@@A,C@@@ is independent. Indeed, the probability of each of @@@A,B,C@@@ is @@@4/8=1/2@@@ (why?), while for the probability of @@@A\cap B@@@, let’s see: Aron different from Baron, and Baron different from Caron, gives exactly two such sequences, HTH and THT. Therefore @@@P(A\cap B) = \frac 28 = \frac 14= P(A) P(B)@@@. OK. No biggie. What about the triple? Well, @@@A\cap B \cap C@@@ is … empty. Aron’s toss must be different from Baron’s, Baron’s different from Caron’s (which together makes Aron’s and Caron’s the same, right?) and Caron’s different from Aron’s. Wait, what? I told you: @@@A \cap B \cap C =\emptyset@@@. Therefore
$$ P(A \cap B \cap C)= 0 \ne \frac 12 \times \frac 12 \times \frac 12 = P(A) P(B) P(C).$$Conclusion: any two of the events are independent, but the three together are not.
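Since the sample space has only eight outcomes, both the pairwise independence and the failure of triple independence can be checked by enumeration. A sketch:

```python
from fractions import Fraction
from itertools import product

# Outcomes are triples (Aron, Baron, Caron) of fair-coin tosses.
outcomes = list(product("HT", repeat=3))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] != o[1]   # Aron's and Baron's tosses differ
B = lambda o: o[1] != o[2]   # Baron's and Caron's tosses differ
C = lambda o: o[2] != o[0]   # Caron's and Aron's tosses differ

# Every pair multiplies correctly...
pairwise = all(
    prob(lambda o, X=X, Y=Y: X(o) and Y(o)) == prob(X) * prob(Y)
    for X, Y in [(A, B), (B, C), (A, C)]
)
# ...but the triple intersection is empty, not probability 1/8.
triple = prob(lambda o: A(o) and B(o) and C(o))
```
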
Here’s another one, showing how calculations are made easy with independence.
The probability I will be late to pick up my son from school is @@@60\%@@@, and the probability my son will leave the school late is @@@30\%@@@. Assuming the events are independent, what is the probability that
- Both of us will be late?
- None of us will be late?
- At least one of us will be late?
Denote by @@@A@@@ and @@@B@@@ the respective events. The first question asks for @@@P(A\cap B)@@@. By independence, @@@P(A \cap B) = P(A) P(B)=0.18@@@. The second asks for @@@P(A^c \cap B^c)@@@. By Proposition 1, @@@A^c@@@ and @@@B^c@@@ are independent, therefore @@@P(A^c \cap B^c) = P(A^c) P(B^c) = 0.4*0.7 = 0.28@@@. The last question asks for @@@P(A\cup B)@@@. By the inclusion-exclusion formula, this is equal to @@@P(A)+ P(B) - P(A\cap B) = 0.9 - 0.18 = 0.72@@@. Can you find the relation between this question and the previous one? Yes, you’re right: the event @@@A\cup B@@@ is the complement of the event @@@A^c \cap B^c@@@ from the second question, by de Morgan’s laws, so we didn’t really need inclusion-exclusion.
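The three answers can be reproduced in a few lines (a sketch, with the probabilities taken from the example):

```python
p_me, p_son = 0.6, 0.3   # P(A): I'm late; P(B): my son leaves late

both = p_me * p_son                  # P(A ∩ B), by independence
neither = (1 - p_me) * (1 - p_son)   # complements stay independent
at_least_one = p_me + p_son - both   # inclusion-exclusion for P(A ∪ B)
```

Note that `at_least_one` also equals `1 - neither`, which is the de Morgan shortcut mentioned above.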
Time for more examples of events which are not independent.
A jar has @@@N\ge 1@@@ balls: @@@K\ge 0@@@ white balls and @@@b\ge 0@@@ black balls, with @@@N=b+K@@@. We select @@@n\le N@@@ balls without replacement (picking one out, then another, etc.). Why the weird choice of @@@K@@@ for the number of white balls? Convention. What is the probability that the first is white? The second? … the @@@n@@@-th? Are the events “first is white” and “@@@j@@@-th is white” independent?
Of course, the probability that the first is white is @@@K/(b+K)@@@. But this is the same as the probability that the @@@j@@@-th is white for any @@@j\le n@@@. Let’s calculate, then give an intuitive explanation.
What is the number of ways to simply select @@@j@@@ balls? To help answer the question, let’s label the balls so they are all distinct (note that we are not changing the colors, just adding another attribute to help with the computation. This is just a computational device, nothing else). Now that all balls are distinct:
- The number of ways to select @@@j@@@ balls is the number of permutations of @@@j@@@ elements from @@@N@@@, @@@N!/ (N-j)!@@@.
- Out of these, how many have the @@@j@@@-th ball white? Count first the number of selections ending with a given white ball (one particular ball out of the @@@K@@@ distinct white balls). This is the number of permutations of @@@j-1@@@ elements (the first @@@j-1@@@ balls) out of @@@N-1@@@ elements (all balls except for the given white ball, which is reserved to be selected @@@j@@@-th). That is, @@@(N-1)!/(N-j)!@@@ ways. Since we have @@@K@@@ distinct white balls, there are @@@K (N-1)!/(N-j)!@@@ ways to select the @@@j@@@-th as white.
Therefore the probability that the @@@j@@@-th is white is the ratio of the two numbers above, @@@K/N=K/(b+K)@@@. So probability that @@@j@@@-th is white is the same for all @@@j@@@.
To get that in an intuitive way, with no calculation, observe that any particular color sequence of the @@@j@@@ selected balls with @@@{\bar K}@@@ whites and @@@{\bar b}@@@ blacks (e.g. “wwbwbbw” or “bbbwwww”) is equally likely. Reversing a sequence turns sequences beginning with “w” into sequences ending with “w”, and therefore the probability of starting with a “w” and the probability of ending with a “w” are the same.
Finally, what about independence? Suppose we know that the @@@1@@@-st is white. Then the probability that the @@@j@@@-th is white is simply the probability that when selecting @@@j-1@@@ (the remaining balls required to get to a total of @@@j@@@), out of @@@K-1@@@ white and @@@b@@@ black, the last is white. But from the last part, we know that this is @@@(K-1)/(N-1)@@@. Therefore,
$$ P(j \mbox{-th white}| 1\mbox{-st white}) = \frac{K-1}{N-1} < \frac{K}{N}= P(j\mbox{-th white}),$$and therefore the events are not independent. This is intuitively clear: knowing the first is white, there is a lower chance of selecting the @@@j@@@-th as white, as there are fewer white balls left to choose from. Let’s record the results for future reference. When selecting without replacement @@@n@@@ balls from a jar containing a total of @@@N@@@ balls, @@@K@@@ white and @@@N-K@@@ black, then for any distinct @@@j,k\in \{1,\dots,n\}@@@ we have
\begin{align}
&P(j\mbox{-th ball is white})=\frac{K}{N},
\nonumber\\
&P(j\mbox{-th ball and }k\mbox{-th ball are white}) = \frac{K-1}{N-1}\,\frac{K}{N}.
\end{align}
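Both recorded formulas can be checked by exact enumeration on a small instance (a sketch: the values @@@K=3@@@, @@@b=2@@@ are an arbitrary choice, and labeling the balls follows the computational device used in the text):

```python
from fractions import Fraction
from itertools import permutations

K, b = 3, 2            # assumed small instance: 3 white, 2 black
N = K + b
colors = ["w"] * K + ["b"] * b
labels = list(range(N))  # labels make the balls distinct

# All N! equally likely orderings of the distinct balls.
orders = list(permutations(labels))

def prob(event):
    return Fraction(sum(1 for o in orders if event(o)), len(orders))

def white(o, j):
    # The j-th draw (1-indexed) is white.
    return colors[o[j - 1]] == "w"

p_first = prob(lambda o: white(o, 1))                 # K/N
p_third = prob(lambda o: white(o, 3))                 # also K/N
p_pair = prob(lambda o: white(o, 1) and white(o, 3))  # (K/N)((K-1)/(N-1))
```

The pair probability comes out strictly below the product of the marginals, matching the non-independence computed above.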
Later is better than never, they say. Here is a \video{\myweb/Independence/Independence.html}{video illustrating the idea of independence geometrically}.
We generalize Definition 1:
Events @@@A_1,A_2,\dots@@@ are independent with respect to the probability measure @@@P@@@ if for any finite subset @@@I\subset\N@@@,
$$P(\cap_{i\in I}A_i)=\prod_{i\in I} P(A_i).$$Be extra careful with this definition. It essentially means that knowing that any combination of the events occurred (or not) does not affect the probability of any of the remaining ones. Another closely related and important notion is conditional independence. Sometimes we have events which are not independent under our underlying probability measure @@@P@@@, but are independent under some conditional probability measure (remember! conditional probability measures are bona fide probability measures). More specifically, we say that events @@@A_1,\dots,A_n@@@ are conditionally independent given @@@B@@@ if, under the probability measure @@@P(\cdot | B)@@@, the events @@@A_1,\dots,A_n@@@ are independent.
The analogous result to Proposition 1 holds for an arbitrary number of events. That is, if @@@A_1,A_2,\dots@@@ are independent, then so is any sequence obtained by replacing some (or all) of them by their complements.
Let’s revisit Example 3. We showed that @@@A@@@ and @@@B@@@ are independent, and so are @@@B@@@ and @@@C@@@, and @@@C@@@ and @@@A@@@. What about @@@A,B@@@ and @@@C@@@? Observe that @@@A \cap B \cap C@@@ is contained in the event that Aron’s toss is different from Aron’s own toss, an empty event, so @@@P(A \cap B\cap C)=0 \ne 1/8 = P(A)P(B)P(C)@@@. After looking at this a little you should understand: knowing that @@@A@@@ and @@@B@@@ occurred guarantees that @@@C@@@ cannot occur, so the probability of @@@C@@@ conditioned on @@@A\cap B@@@ is not the same as the probability of @@@C@@@, and therefore @@@C@@@ is not independent of @@@A \cap B@@@.
People tend to confuse independent events with disjoint events. These don’t go hand-in-hand, unless one of the events has probability zero. Remember: independence means that the conditional probability, whenever it can be defined, is the same as the original.
Suppose that @@@A@@@ and @@@B@@@ are independent and disjoint. Show that @@@P(A)=0@@@ or @@@P(B)=0@@@.
Suppose that @@@A@@@ and @@@B@@@ are independent with @@@P(A)=0.1@@@ and @@@P(B)=0.8@@@. What is @@@P(A^c \cup B^c)@@@?
Very often we use terms like “independent coin/die tosses”, “independent games”, etc. Note that none of these are actual events in the probabilistic sense (even though coin tosses are considered by many to be major historical events), but rather parts or components of some bigger experiment. When saying that we have a sequence of independent coin tosses, we mean the following:
- Each coin toss is part of a big experiment whose actual sample space are all possible sequences of Heads and Tails.
- Any sequence of events, each of which is determined by (described by / is a function of) the outcome of one individual toss, is independent.
For example, if I tell you that a sequence of coin tosses is independent, I mean that events like “first toss is Heads”, “twelfth toss is Heads” and “hundredth toss is Heads” are independent. In a more general setting, saying that the revenue of Walmart this quarter and the number of math seminars at UConn next semester are independent, we mean that events like “Walmart’s revenue this quarter will exceed @@@200@@@M USD” and “there will be fewer than 20 math seminars at UConn next semester” are independent (this was written long before the COVID-19 pandemic).
Consider a sequence of independent coin tosses. The probability of Heads in each toss is equal to some constant @@@p@@@. What is the probability that the first Heads will appear after more than @@@n@@@ tosses?
Note that the event we are considering is that all of the first @@@n@@@ tosses are Tails: the intersection of “first is Tails”, “second is Tails”, …, “@@@n@@@-th is Tails”. Since we assumed the tosses are independent, these events are independent, so the probability of their intersection is the product of the respective probabilities and is equal to @@@(1-p)^n@@@.
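A quick simulation agrees with the formula @@@(1-p)^n@@@ (a sketch: the values @@@p=0.3@@@ and @@@n=5@@@ are arbitrary choices):

```python
import random

random.seed(1)

p = 0.3            # assumed probability of Heads, for illustration
n = 5
trials = 200_000

def first_heads_after(p, n):
    # True iff the first n independent tosses all come up Tails,
    # i.e. the first Heads appears after more than n tosses.
    return all(random.random() >= p for _ in range(n))

estimate = sum(first_heads_after(p, n) for _ in range(trials)) / trials
exact = (1 - p) ** n   # 0.7^5 = 0.16807
```
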
The following example is what makes headlines every day: improbable events.
Let’s suppose that both I and my friend agree to each pick a random integer between @@@1@@@ and @@@1000@@@.
- What is the probability we choose the same number? @@@1/1000@@@.
- Suppose now that, besides me, a thousand people are choosing numbers independently. What is the probability that at least one of them will choose mine?
Here is a wrong answer: @@@1@@@, because each choice is correct with probability @@@1@@@ in @@@1000@@@ and we have @@@1000@@@ choices. Of course this claim is wrong: it is possible that all choices will miss the number I picked.
Let’s rephrase everything in terms of probability. We are interested in @@@P(A)@@@, where @@@A@@@ is the event “at least one of the other choices is the same as mine”, and where @@@A=\cup_{i=1}^{1000} A_i@@@, with @@@A_i@@@ being the event “@@@i@@@-th person chooses my number”. Clearly @@@P(A_i) = 1/1000@@@. However, @@@P(A)\ne \sum_{i=1}^{1000} P(A_i)@@@ because… the @@@A_i@@@’s are not disjoint.
- What to do? How do we compute @@@P(A)@@@? We could try the inclusion-exclusion formula, but that is very tedious. Let’s use a simple trick. What is the complement of @@@A@@@? Nobody chooses my number. By de Morgan’s laws, this is simply @@@\cap_{i=1}^{1000} A_i^c@@@. Now independence comes to the rescue: as the @@@A_i@@@’s are independent, so are their complements, and therefore the probability of the intersection is simply the product of the probabilities.
Therefore
$$ P(A) = 1- P(A^c) = 1- (1-0.001)^{1000} \approx 0.632.$$That is, the probability of at least one correct guess is about @@@63\%@@@: very different from @@@1@@@, but still pretty large.
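The computation takes two lines; as an aside (not from the example itself), the number @@@0.632@@@ is no accident, since @@@(1-1/n)^n \to e^{-1}\approx 0.368@@@:

```python
import math

n_people = 1000
p_hit = 1 / 1000   # probability a single person picks my number

# Switch to the complement: nobody picks my number.
p_none = (1 - p_hit) ** n_people
p_at_least_one = 1 - p_none

# For large n, p_none is close to exp(-1); this limit is a preview of
# ideas that recur when discussing rare events.
```
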
Summarizing.
- The (extremely) useful technique we observed was switching to complement to convert a problem about a union to a problem about intersection, and then utilizing independence to compute the probability of intersection.
- The subtlety of independence is that probabilities don’t just add up, because independent events with nontrivial probabilities are not disjoint.
- We saw that although each event (guessing correctly) had very low probability, the probability that at least one occurs is pretty large. When we take a huge number of such improbable, independent events, the probability that at least one occurs crawls up to @@@1@@@. This is very substantial. Did anything very unlikely ever happen to you? As this example illustrates, it is bound to happen eventually. More on this? Here’s a nice article from Slate
Finally, something for sports lovers.
The Hartford Whalers are playing the Boston Bruins in a best-of-seven series: the first team to win four games wins the series. Let’s say (just to make it hard) that the Hartford Whalers have probability @@@p@@@ of winning each game, independently of the others. What is the probability that the Whalers will win the series?
We know that the answer is a function of @@@p@@@, and not just any function of @@@p@@@. A configuration of games leading to the Whalers winning (like WBWWBBW) has probability of the form @@@p^4@@@ (for the four games the Whalers won) times @@@(1-p)^\ell@@@ (for the games the Bruins won), where @@@\ell\in\{0,1,2,3\}@@@. Therefore the probability of the Whalers winning is a polynomial @@@g@@@ of the form
$$g(p) = p^4 + c_5 p^4 (1-p) + c_6 p^4 (1-p)^2+ c_7 p^4 (1-p)^3,$$where the constants @@@c_5,c_6,c_7@@@ count the configurations leading to a @@@5@@@-game, @@@6@@@-game and @@@7@@@-game series, respectively (and do not depend on @@@p@@@!). Now
- @@@c_5=4@@@: the Whalers win the fifth game and the Bruins win exactly one of the first four games;
- @@@c_6=\binom{5}{2}=10@@@: the Whalers win the sixth game and the Bruins win exactly two of the first five games; and
- @@@c_7 = \binom{6}{3}=20@@@: the Whalers win the seventh game and the Bruins win exactly three of the first six games.
Therefore
$$ g(p) = p^4 (1+ 4 (1-p) +10 (1-p)^2 + 20 (1-p)^3).$$Sanity check: for @@@p=1/2@@@ we should get @@@g(p)=1/2@@@. Indeed we have
$$ g(1/2) = \frac{1}{16} ( 1+ 2 + 10/4 + 20/8)=8/16=1/2.$$Let’s do an example with conditionally independent events which are not independent.
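The series-winning polynomial @@@g@@@ above is easy to sanity-check numerically (a sketch: @@@p=0.6@@@ is an arbitrary choice, and the simulation simply plays games until one team reaches four wins):

```python
from math import comb
import random

def g(p):
    # The polynomial derived above for the Whalers winning the series.
    return p**4 * (1 + 4*(1 - p) + 10*(1 - p)**2 + 20*(1 - p)**3)

def whalers_win_series(p, rng):
    # Play independent games until one team collects four wins.
    whalers = bruins = 0
    while whalers < 4 and bruins < 4:
        if rng.random() < p:
            whalers += 1
        else:
            bruins += 1
    return whalers == 4

rng = random.Random(2)
p = 0.6
trials = 100_000
estimate = sum(whalers_win_series(p, rng) for _ in range(trials)) / trials
```

The coefficients @@@4,10,20@@@ are the binomial counts from the bullet list, and @@@g(1/2)=1/2@@@ matches the sanity check.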
The proportion of households with toddlers is @@@20\%@@@. For households with toddlers, the probability of an accident leading to an urgent care visit is @@@10\%@@@ each day, independently of other days. For households without toddlers, the probability of an accident leading to an urgent care visit is @@@4\%@@@ each day, independently of other days.
Let’s parse this. Let @@@B@@@ be the event “household has a toddler”. We are given @@@P(B) = .2@@@. Let @@@A_1,A_2,\dots,A_7@@@ be the events “urgent care visit on day @@@1@@@”, …, “urgent care visit on day @@@7@@@”. We are also given that
- @@@P(A_i|B) = 0.1,~i=1,\dots,7@@@, and @@@A_1,\dots,A_7@@@ are independent under @@@P(\cdot | B)@@@.
- @@@P(A_i|B^c)=0.04@@@ for @@@i=1,\dots,7@@@, and that @@@A_1,\dots,A_7@@@ are independent under @@@P(\cdot | B^c)@@@.
The first question I’d like to ask is this: are @@@A_1,\dots,A_7@@@ independent under @@@P@@@? Let’s check. We will show that they are not. All it takes is to show that @@@A_1@@@ and @@@A_2@@@ are not independent under @@@P@@@. By the total probability formula, for any @@@i=1,\dots,7@@@
$$ P(A_i)= P(A_i|B) P(B) + P(A_i|B^c) P(B^c),$$and therefore
$$ P(A_i) = 0.1*0.2 + 0.04*0.8 = 0.02+0.032=0.052.$$That is,
$$ P(A_1)P(A_2)=0.052^2 =0.002704.$$Now what is @@@P(A_1 \cap A_2)@@@? Well, by the total probability formula and conditional independence
$$\begin{align*} P(A_1 \cap A_2) &=P(A_1 \cap A_2 | B) P(B)+ P(A_1 \cap A_2 | B^c) P(B^c)\\ & = 0.1^2*0.2+ 0.04^2*0.8= 0.00328. \end{align*}$$So the probability of the intersection is larger than the product of the probabilities: the events are not independent. We can also use this to calculate a conditional probability:
$$ P(A_2|A_1) = P(A_1 \cap A_2) / P(A_1) \approx 0.063.$$In words, knowing that the household had to go to urgent care on day 1 increases the probability it will go on day 2. In fact, this can also be viewed as increasing the probability that the household has a toddler (event @@@B@@@). Let’s calculate this using Bayes’ formula:
$$P(B|A_1) = \frac{P(A_1|B)P(B)}{P(A_1)}= \frac{0.1*0.2}{0.1*0.2 + 0.04*0.8}\approx 0.38.$$The probability of having a toddler is almost doubled, just because of this additional information.
Finally, consider the following. I’m telling you that in the last seven days there was NO urgent care visit. What is the probability that the household has a toddler? That is, what is @@@P(B| \cap_{i=1}^7 A_i^c)@@@? To ease notation, let @@@A=\cap_{i=1}^7 A_i^c@@@. Then using complements and the presumed conditional independence, we have
$$ P(B|A) = \frac{ P(A|B) P(B) }{ P(A)}=\frac{0.9^7 *0.2}{0.9^7*0.2 + 0.96^7*0.8}\approx 0.137.$$In other words, if the household makes no urgent care visit within a week, the chances it has a toddler are about @@@13.7\%@@@, smaller than the overall proportion of households with a toddler.
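All of the numbers in this example can be reproduced in a few lines (a sketch, following the total probability and Bayes computations above):

```python
p_toddler = 0.2            # P(B)
p_visit_toddler = 0.1      # daily urgent-care probability, toddler household
p_visit_no_toddler = 0.04  # daily probability otherwise

# Total probability formula for a single day.
p_visit = p_visit_toddler * p_toddler + p_visit_no_toddler * (1 - p_toddler)

# Two days, using conditional independence within each household type.
p_two_days = (p_visit_toddler**2 * p_toddler
              + p_visit_no_toddler**2 * (1 - p_toddler))

# Bayes: toddler given a visit on day 1.
p_toddler_given_visit = p_visit_toddler * p_toddler / p_visit

# Bayes: toddler given a visit-free week (complements, day by day).
p_quiet_week = ((1 - p_visit_toddler)**7 * p_toddler
                + (1 - p_visit_no_toddler)**7 * (1 - p_toddler))
p_toddler_given_quiet_week = (1 - p_visit_toddler)**7 * p_toddler / p_quiet_week
```

Note how `p_two_days` exceeds `p_visit**2`, which is exactly the failure of (unconditional) independence shown above.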
Problems
A raffle has @@@n > 2@@@ tickets. Suppose there are two marked prizes, and I buy two tickets. Are the events “I won first prize” and “I won second prize” independent?
I’m participating in 10 different auctions, and have probability of winning each @@@0.2@@@. Assuming all auctions are independent, what is the probability I will win at least two of them?
Let’s assume that each day the probability it will rain is @@@20\%@@@, independently of the other days, and that each day the probability it will be windy is @@@60\%@@@, independently of other days and of the rain. What is the probability I will have to wait at least @@@k@@@ days for a day which is neither rainy nor windy?
Every time I go to the supermarket, at checkout I can push a button to play a game. Assume that the probability of success in each game is constant and that all games are independent; I don’t know what the probability of winning is.
- My first win was after @@@7@@@ games. Find a value of the probability of winning in a game which maximizes the probability of this event.
- Revise your answer to part 1 assuming in addition that the second win was @@@4@@@ games after the first win.
- Can you generalize your answer to the case where first win was after @@@k_1@@@ attempts, second after additional @@@k_2@@@ attempts, …, @@@n@@@-th after @@@k_n@@@ additional attempts ?
How important is a head start? You and I are participating in a quiz show. In the show, we take turns and at each turn each participant has to answer a question. The first to give a correct answer wins. We assume that the answers to the questions are independent of each other, and that for each player the probability of answering a question correctly is fixed.
- If each player answers each question correctly with probability @@@p@@@, what is the probability that the one who played first wins?
- Suppose we were told that the probability that the second player wins is @@@1/2@@@. Find a relation between the probability that the first player answers a question correctly and the probability that the second player answers a question correctly.
In order to get into a bowl game, UConn Football has to win at least @@@5@@@ of @@@9@@@ games. Let’s assume that they are equally likely to win or lose each game, independently of the other games. We have two fans who are also probability experts, but each with different information provided:
- Fan A was at the first game, and witnessed UConn winning. Based on this information the probability UConn gets into a bowl game is @@@p_A@@@.
- Fan B was in a coma all season, and upon waking up asked the doctor if she knew how the season ended. All the doctor could tell was that UConn had won at least one game in the regular season (she could not tell which). Fan B then calculates the probability that UConn got into a bowl game and finds it to be @@@p_B@@@. Which is larger, @@@p_A@@@ or @@@p_B@@@?
We continue Example 9. We assume two teams, say the Whalers and the Bruins, are playing a best-of-seven series. The first team to win four games wins the series. We assume that in each game each team is equally likely to win and that the games are independent.
- Find the probability the series will end after exactly @@@4@@@ games, @@@5@@@ games, @@@6@@@ games and @@@7@@@ games.
- Find the probability that the Whalers win the series, conditioned on their having won at least @@@1@@@ game, @@@2@@@ games, and @@@3@@@ games, respectively.
In order to guarantee smooth operation, the University has three web-servers. Each can handle the traffic by itself, and the probability that each is not working on a given day is @@@15\%@@@, independently of the other servers. Assuming that the system is up, what is the probability that only one server is functioning?
- The probability a player makes a free throw in home game is 65%, independently of everything else.
- The probability a player makes a free throw in an away game is 50%, independently of everything else.
- Half of the games are home games.
- What is the probability that when the player throws two free throws she will make both?
- Assuming the player makes the first free throw, what is the probability she will make the second? Are the two events independent?
- Assuming that the player made both free throws, what is the probability it was a home game?
We’re playing Chinese Whispers. I choose a digit, @@@0@@@ or @@@1@@@, and whisper it to the person sitting next to me. With probability @@@p\in (0,1)@@@, that person hears it correctly, and with probability @@@1-p@@@ hears the wrong digit. That person whispers it to the next person, and so on, each repetition independent of the others. What is the probability that the @@@n@@@-th person hears the correct digit? Find the limit as @@@n\to\infty@@@. Why aren’t you surprised?