
Chapter 1. Counting

Motivation

The title of this chapter may sound trivial, but be aware that learning how to count is not as simple as it seems. Complex counting problems underlie many probability calculations: since we must count the number of favorable outcomes and the total number of outcomes, this topic is a sensible place to start.

As will become clear in this chapter, counting problems tend to grow in complexity very quickly. Counting, however, is one of the most rewarding concepts to learn: with steady effort and a systematic approach, you will soon be able to enjoy the simple beauty of not-so-simple counting.

Sets

We won't go deeply into set theory here (one interesting fact: 'set' is among the words with the most definitions in the English dictionary). However, the notation covered here will appear frequently throughout the book, so it is important to get comfortable with it early on.

  • Set and Subset – A 'set' is defined as a collection of objects (i.e., a set can consist of letters, numbers, names, etc.). A 'subset' is a 'set within a set': set A is a subset of set B if and only if every element of A (each object in a set is called an element) is also in B.
  • Empty Set – The set that contains nothing, denoted ∅.
  • Union – 'A ∪ B' reads 'A union B' in English. It is shorthand for A or B. This set (yes, the union of two sets is itself a set) contains the elements that are only in A, the elements that are only in B, and the elements that are in both A and B. Think of the union as the entire Venn diagram of A and B.
  • Intersection – 'A ∩ B' reads 'A intersect B' in English. It is shorthand for A and B. Think of the intersection as the overlapping middle section of the Venn diagram of A and B.
  • Complement – You will see this written in two ways: $A^c$ and $\bar{A}$ both read 'A complement' in English. This is shorthand for A not occurring; the set consists of everything that is not in A.
  • Disjoint – Sets that have no overlap. We can say that the intersection of disjoint sets is the empty set (the intersection of two disjoint sets has no elements, which by definition is the empty set).
  • Partition – Consider a set A, and subsets $A_1, A_2, A_3, A_4, \ldots, A_n$ within it. If these subsets are disjoint (i.e., non-overlapping) and together cover every possible outcome in A, we say that they partition the set A. A simple example from politics: for simplicity, suppose for the moment that only the two major American parties, Democrats and Republicans, exist. In this simplified setting, the set D of all Democratic politicians and the set R of all Republican politicians partition the set P of all American politicians. That is because a politician cannot be both a Democrat and a Republican at the same time (the sets are disjoint, since there is no overlap), and Democrats and Republicans together (in this simplified example; in reality there are other, less prominent parties) cover the entire set of American politicians.

The topics we have covered so far are relatively easy to grasp. Let's now solidify these concepts with an example. Remember that unions, intersections, and complements are themselves sets.

Example 1.1 (Statistics at Harvard)
  • Let set A be the set of all Statistics concentrators at Harvard, and let set B be the set of all students living in Winthrop House (a specific dorm at Harvard; students are placed into one of 12 houses, much like the 4 houses of Hogwarts).
  • What is the union of sets A and B? In English, it is A or B. The union is therefore the set of students who are Statistics concentrators but do not live in Winthrop, students who live in Winthrop but are not Statistics concentrators, and students who live in Winthrop and are Statistics concentrators. It helps to picture a simple Venn diagram with two circles labeled 'Harvard Statistics concentrators' and 'Harvard Winthrop residents.'
  • What is the intersection of sets A and B? In English, it is A and B. The intersection is therefore the set of all students who live in Winthrop and are Statistics concentrators. Visually, picture the overlapping region of the Venn diagram. Note that the union A ∪ B contains the intersection A ∩ B; formally, the intersection is a subset of the union.
  • What is the complement of set A? In English, it is A not occurring; this is the set of everyone who is not a Statistics concentrator at Harvard.
  • Finally, consider an example of disjoint sets. Let set P be the set of all Presidents in United States history, and let set M be the set of all people named Matt. These two sets do not overlap, so they are disjoint: no President has been named Matt, and no one named Matt has become President. Formally, they are disjoint because their intersection is the empty set: no one belongs to the set of 'U.S. Presidents named Matt.' Visually, the Venn diagram consists of two circles that do not overlap.

 

Naive Probability

Having come this far, you may be wondering about the title of this chapter: why does something as seemingly trivial as counting deserve a place in this book? As with most of the concepts we will discuss, it all starts with probability.

Concept 1.1 (Naive Definition of Probability):

The probability of an event, when all outcomes are equally likely, is defined as:

\[P(\text{Event}) = \frac{\text{number of favorable outcomes}}{\text{total number of possible outcomes}}\]

When we are working with probabilities, our notation will be $P(A)$. In English, this means 'the probability that event $A$ occurred.' So, if $A$ is the event of flipping heads in one flip of a fair coin, then $P(A) = 1/2$. The above, then, is just notating 'the probability of an event occurring.' We will frequently use the set notation discussed in the previous section when working with probabilities, which is why that section was placed first!

This ‘Naive Definition’ is a reasonable place to start, because it’s likely how you have calculated probabilities up to this point. Of course, this is not always the correct approach for real world probabilities (hence the name ‘naive’). For example, imagine a random person running for President of the United States. Using the naive definition of probability, their probability of winning the election is .5; one favorable outcome (winning) divided by the total number of outcomes (winning or losing). Clearly this is a simplified approach; different outcomes often have different probabilities associated with them, so it’s important to realize when the naive definition does and does not apply. In the President case, the ‘losing’ outcome is much more likely, so the naive ‘weighting’ scheme does not apply.

Anyways, we often apply the naive definition (correctly, hopefully) automatically: if someone asks you the probability that a fair die rolls a 6, you would say $1/6$ because you quickly reason that there are six outcomes (rolls 1 to 6) and one that is favorable (rolling a 6), so the overall probability is just $\frac{1}{6}$. In this example, it's very easy to count the number of favorable outcomes and number of total outcomes; however, counting the number of outcomes can quickly become much more complex (as we soon will see). That is why this chapter is necessary: to count the number of total and favorable outcomes, as outlined in the formula above, for very complicated problems. We'll now dive into some tools that are useful for counting outcomes in certain, specific scenarios.
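As a quick illustration, here is a minimal R sketch of the naive definition applied to the die example (the helper function naive_prob is ours, defined just for illustration; it is not from any package):

#naive definition: favorable outcomes divided by total outcomes
naive_prob = function(favorable, total){
  return(favorable/total)
}

#probability of rolling a 6 on a fair die; should get 1/6
naive_prob(1, 6)
## [1] 0.1666667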

Concept 1.2 (Multiplication Rule):

To understand the Multiplication Rule, visualize a process that has multiple steps, where each step has multiple choices. For example, say that you are ordering a pizza. The first ‘step’ is to select a size, and this ‘step’ has multiple ‘choices’ or ‘outcomes’: small, medium, large, etc. The next ‘step’ is to pick toppings, and again you have multiple ‘choices/outcomes’: pepperoni, sausage, peppers, etc. Another step might be delivery or no delivery. You can see how this problem grows, and counting the total number of outcomes can become tedious.

The idea here is that we would like to count the number of possible pizzas you could order (i.e., one possibility is 'small mushroom delivery,' another is 'large pepperoni, no delivery,' etc.) in a structured way, since we will need to be able to count these types of outcomes for naive probability problems (for example, what's the probability that a randomly ordered pizza is a large and has sausage? We first need to count how many possible pizzas there are, or the denominator in the formula for the naive definition of probability). So, the multiplication rule states that, in a process where the first step has $n_1$ choices, the second step has $n_2$ choices, all the way up to the $k^{th}$ step that has $n_k$ choices, the total number of possible outcomes is just the product of the choices: $n_1 \cdot n_2 \cdots n_k$.

Let’s solidify this concept with the pizza example. Say that you have 3 choices of size: small, medium and large. You then have 4 choices of toppings: pepperoni, meatball, sausage, and extra cheese. Your final choice is delivery or pickup. All in all, you have these three sets of choices:

  1. Size (small, medium, or large)
  2. Topping (pepperoni, meatball, sausage, or extra cheese)
  3. Order Type (delivery or pickup)

Using the multiplication rule, we can easily count the number of distinct pizzas that you could possibly order. Since there are 3 choices for size, 4 choices for toppings, and 2 choices for order type, we simply have $3 \cdot 4 \cdot 2 = 24$ different pizza options (applying the formula from above, we have $n_1 = 3$, $n_2 = 4$ and $n_3 = 2$).

Now that we have counted the total number of possible pizzas, it is easy to solve various probability problems. For example, consider finding the probability of ordering a large sausage if you randomly order a pizza: there are 2 favorable outcomes where you get a large sausage (large sausage not delivered and large sausage delivered) and 24 total outcomes/pizzas, so the probability is $\frac{2}{24} = \frac{1}{12}$. Note that we are randomly ordering a pizza, so that each of the possible 24 pizzas has an equal chance of being selected; this allows us to continue to work in a naive framework.

This is far faster than writing out all of the possible combinations by hand, and 24 combinations isn’t even that extensive; imagine if we had 10 options for each choice! The multiplication rule can greatly simplify a problem.

Let’s use R to generate all of these combinations and see if the empirical result matches our analytical work.

 

#define the vectors for size, topping and order
size = c("S", "M", "L")
topping = c("pepperoni", "sausage", "meatball", "extra cheese")
order = c("deliver", "pick-up")

#keep track of the pizzas
pizzas = character(0)

#iterate over each value for each variable
for(i in 1:length(size)){
  for(j in 1:length(topping)){
    for(k in 1:length(order)){
      
      #create a pizza
      pizzas = rbind(pizzas, c(size[i], topping[j], order[k]))
    }
  }
}

#print out the pizzas; should have 24
pizzas
##       [,1] [,2]           [,3]     
##  [1,] "S"  "pepperoni"    "deliver"
##  [2,] "S"  "pepperoni"    "pick-up"
##  [3,] "S"  "sausage"      "deliver"
##  [4,] "S"  "sausage"      "pick-up"
##  [5,] "S"  "meatball"     "deliver"
##  [6,] "S"  "meatball"     "pick-up"
##  [7,] "S"  "extra cheese" "deliver"
##  [8,] "S"  "extra cheese" "pick-up"
##  [9,] "M"  "pepperoni"    "deliver"
## [10,] "M"  "pepperoni"    "pick-up"
## [11,] "M"  "sausage"      "deliver"
## [12,] "M"  "sausage"      "pick-up"
## [13,] "M"  "meatball"     "deliver"
## [14,] "M"  "meatball"     "pick-up"
## [15,] "M"  "extra cheese" "deliver"
## [16,] "M"  "extra cheese" "pick-up"
## [17,] "L"  "pepperoni"    "deliver"
## [18,] "L"  "pepperoni"    "pick-up"
## [19,] "L"  "sausage"      "deliver"
## [20,] "L"  "sausage"      "pick-up"
## [21,] "L"  "meatball"     "deliver"
## [22,] "L"  "meatball"     "pick-up"
## [23,] "L"  "extra cheese" "deliver"
## [24,] "L"  "extra cheese" "pick-up"

 

#count total number of large sausages; should get 2
#   we divide by 3 because rows are length 3, and we want
#   to convert back to number of rows (i.e., number of pizzas)
length(pizzas[pizzas[ ,1] == "L" & pizzas[ ,2] == "sausage"])/3
## [1] 2

Now let's try to build some intuition as to why the multiplication rule works. Imagine flipping a fair coin and simultaneously rolling a fair, 6-sided die and marking down the result as one outcome (i.e., $H6$ means we flipped heads and rolled a 6). Using the multiplication rule, it is easy to count the number of possible outcomes: there are 2 possible outcomes of the coin flip and 6 possible outcomes of the die roll, so there are $2 \cdot 6 = 12$ possible outcomes. Now, consider why this makes sense. Imagine 'locking in' the coin as a 'Heads': there are then 6 possible outcomes, Heads and every roll of the die. Now imagine 'locking in' the coin as a 'Tails': as before, there are 6 outcomes, Tails and each roll of the die. Of course, these two sets together represent all 12 of the possible outcomes, and we can envision this process as a sort of multiplication: when we 'locked in' the results of the coin, we essentially had 2 sets (the number of sides on the coin) of size 6 (the number of sides on the die), so we multiplied these numbers for the total number of outcomes. If we had a coin with more sides (ignoring the fact that it wouldn't be a 'coin' anymore) we would still be performing a multiplication (just imagine 'locking in' the potential outcomes for the coin, one at a time). This concept extends to processes with more steps (you can think of just 'locking in' multiple coins, for example) and thus we have the multiplication rule.
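As a quick sketch of this intuition, we can enumerate the coin-and-die outcomes in R with the base function expand.grid, which builds every pairing of its inputs:

#every (coin, die) pairing; 'locking in' a coin face gives 6 rows each
outcomes = expand.grid(coin = c("H", "T"), die = 1:6)

#should get 2*6 = 12 outcomes
nrow(outcomes)
## [1] 12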

 

Anyways, we could easily make our pizza problem from earlier much larger, as we hinted at above (selecting from 20 different toppings, etc.). However, we could also make the problem much more complicated: what if you could order multiple toppings? What if you wanted to order multiple pizzas, with the constraint that no two pizzas are identical? It is unclear how the multiplication rule would neatly apply to these paradigms. In fact, to answer these questions, we need to learn more about counting.

Concept 1.3 (Factorial):

You've likely seen this before: a number followed by an exclamation point. As you probably know, 5! doesn't mean that 5 is excited. Instead, it means $5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120$, or the product of all positive integers less than or equal to 5.
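R has a built-in factorial function, so we can verify this directly:

#5! = 5*4*3*2*1; should get 120
factorial(5)
## [1] 120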

You may have used the factorial for simple arithmetic calculations, but we are introducing it here in the context of counting. How could this formula be an effective tool for solving counting problems? Earlier, we considered the problem of counting the number of outcomes for a process with multiple steps, where each step has multiple choices (the multiplication rule). Consider now the problem of counting the number of ways to order $n$ elements; specifically, the number of ways to order the letters A, B, and C. We will define a specific 'arrangement' or 'order' as a permutation (i.e., $ABC$ is one permutation), so here we are trying to count the total number of permutations.

You could likely figure this out by just writing out all of the permutations:

ABC, ACB, BAC, BCA, CAB, CBA

It's clear that there are 6 permutations (they are all listed above, and there is no other way to order the three letters). However, you might not always be able to write the permutations out so conveniently: what if you had to do the same for all 26 letters in the alphabet? In that case, if you didn't feel like writing out the 26 letters over and over and over, you could use the factorial for a more elegant solution. Since there are three distinct letters in our original example, the number of permutations when ordering A, B and C is 3! (which cleanly comes out to $3 \cdot 2 \cdot 1 = 6$, an answer that we showed to be correct above by writing out all of the permutations).

So, the factorial can be used to count permutations. Let's build some intuition as to why this holds: think about actually ordering the letters A, B, C. There are three 'slots' that the letters could take: they could either be the first letter in the sequence, the second letter or the third letter (i.e., for the permutation $BAC$, $B$ is the first letter, $A$ is the second letter and $C$ is the third letter). How many different letters can we pick for the first 'slot' (i.e., how many letters can we pick to be the first letter in the sequence)? Well, since no letters have been picked yet, there are 3 choices (we can pick A, B or C). How about the second slot? Since we know that 1 letter has been picked for the first slot (A, B or C, it doesn't matter which) there are now 2 letters left that can go in the second slot. What about the last slot? We know 2 letters have already been picked for the first two slots, so there is only 1 remaining letter.

Since we want the total number of permutations, we employ the multiplication rule (notice how the concepts are already building on themselves; we use the multiplication rule to understand the factorial) and multiply each of the descending 'numbers of letters left,' which is the same as taking the factorial of 3. So, if we asked the same question but this time wanted to know how many ways there are to order the entire alphabet, the answer would be $26!$. Clearly, employing the same brute force method from above (writing out all of these permutations) would be near impossible. If you could write one permutation every second, it would still take over $10^{19}$ years to write all of the permutations, much longer than the current age of the known universe.
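If you are skeptical of that claim, a quick back-of-the-envelope calculation in R (assuming one permutation per second and roughly 60*60*24*365 seconds in a year) confirms it:

#number of years needed to write out all 26! permutations of the alphabet
#   at one permutation per second; comes out to roughly 1.3 * 10^19 years
factorial(26)/(60*60*24*365)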

We can confirm this concept by counting all of the possible orderings of the letters A through G with some code in R. There are $7! = 5040$ permutations, so finding the permutations with a brute force method (i.e., writing them all out) is not desirable. A computer, however, can generate the permutations quickly. Here, we will use the permn function from the combinat package (we will see other ways to generate these types of combinations later in this chapter; note that we use combinat::permn, which is our way of telling R to use the permn function from the combinat package; there are other packages that have different functions named permn, so we have to specify which one we want!).

 

#generate all of the possible permutations
perms = combinat::permn(c("A", "B", "C", "D", "E", "F", "G"))

#look at the first few permutations
head(perms)
## [[1]]
## [1] "A" "B" "C" "D" "E" "F" "G"
## 
## [[2]]
## [1] "A" "B" "C" "D" "E" "G" "F"
## 
## [[3]]
## [1] "A" "B" "C" "D" "G" "E" "F"
## 
## [[4]]
## [1] "A" "B" "C" "G" "D" "E" "F"
## 
## [[5]]
## [1] "A" "B" "G" "C" "D" "E" "F"
## 
## [[6]]
## [1] "A" "G" "B" "C" "D" "E" "F"

 

#should get factorial(7) = 5040
length(perms)
## [1] 5040

Anyways, despite this long-winded explanation, hopefully it is clear that the factorial can be used (by employing the multiplication rule) for counting ways to order, or permute, objects. This allows us to solve more complex counting problems, but we now move to a concept that will expand our horizons even further.

Concept 1.4 (Binomial Coefficient):

This is perhaps the most useful counting tool we will see in this chapter. It is represented by $\binom{n}{x}$, which in English is pronounced '$n$ choose $x$.' The full formula, defined in terms of factorials (which we just learned about; again, notice how we need previous concepts to define new concepts), is:

\[\binom{n}{x} = \frac{n!}{(n - x)! \, x!}\]

When we learned about the multiplication rule and the factorial, we learned that these formulas gave us a way to solve some sort of counting problem. The same holds true for this concept. Quite simply, the binomial coefficient gives the number of ways that x objects can be chosen from a population of n objects.

Perhaps the best way to visualize this type of problem is counting the number of ways to form committees out of a larger group. Say you have a population of 300 parents at the local elementary school, and you want to choose 10 parents to be a part of the PTA (Parent Teacher Association). Since we have a population of $n = 300$ and we want to choose $x = 10$ parents from the population, $\binom{300}{10}$ gives the number of ways that we can pick 10 parents from the pool of 300, or the number of combinations possible for the PTA committee. Clearly, this number is quite large, which means that listing out the possible combinations would not be feasible (note that we define one possible iteration – i.e., one possible committee – as a combination, just as we defined a permutation above as one possible ordering of objects).
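To get a sense of the scale, we can compute this number with R's built-in choose function; the result is an enormous number, which is exactly why listing the committees out isn't feasible:

#number of ways to choose 10 parents from a pool of 300; an enormous number
choose(300, 10)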

Again, similar to the factorial and the multiplication rule, it’s not immediately clear how the binomial coefficient correctly counts the number of outcomes in this specific situation (choosing a committee out of a group of people). Let’s break it down with a manageable example.

Say we have the 5 letters A, B, C, D and E, and we want to find out how many ways there are to select 3 letters from this set of 5 letters (some of the possible combinations are $ABC$, $ABD$, $CDE$, etc.). You can frame this in terms of the PTA example above: in this case, we just have $n = 5$ 'parents' and we want to select a committee of $x = 3$ 'parents.'

We already know from the definition of the binomial coefficient that the number of combinations is $\binom{5}{3} = 10$, but we haven't yet discussed the intuition behind this result. Let's review the full formula for the binomial coefficient, $\binom{n}{x} = \frac{n!}{(n - x)!\,x!}$, or in this case, plugging in $n = 5$ and $x = 3$, $\binom{5}{3} = \frac{5!}{2!\,3!}$, to gain some intuition.

Let's start from the top (literally and figuratively). The numerator $5!$ gives us the number of ways we can organize the letters A, B, C, D and E (it turns out there are 120 ways, since $5! = 120$). However, we aren't asking for the number of ways that we can order the entire set (the permutations), but the number of ways we can choose a certain number of letters out of the entire set (the combinations). Therefore, this $5!$ is counting permutations like ACDBE and EDCBA, where we only want the combinations with 3 letters, not all 5 (from the problem statement, we're only choosing 3 letters for our 'committee'). Instead, we can think about this sequence of 5 letters (one permutation) as a sequence of 3 letters followed by a sequence of 2 letters. Of course, we only want the sequence of 3 letters and could not care less about the 2 letters left over, so we have to divide out those extra two letters that we don't want.

So, if someone is putting together the 5 letters in a random order, we essentially want to stop them at 3 letters (this will define a committee of 3). How can we count in this way? Well, once they have picked three letters to order out of all five letters (like BEC or DAB), they still have $(5 - 3)!$ possible ways that they can order the remaining letters. If they start with ABC, for example, there are $(5 - 3)!$, or $2$, ways to order the remaining letters: DE and ED. The $5!$, then, is overcounting what we want (groups of three) by a factor of 2!, so we must divide the $5!$ by $(5 - 3)!$, which is of course just $2!$ or $2$.

Therefore, we are starting from the number of ways that we can pick the 5 letters in a row, and then adjusting for the fact that we are picking the entire sequence of 5 letters instead of the 3 letters that we asked for by dividing by $(5 - 3)!$, or $2!$, the letters we don't care about. It's sort of like dividing by the number of ways to order the letters we did not choose.

That accounts for the $n!$ in the numerator and the $(n - x)!$ in the denominator of the binomial coefficient. What about the $x!$ term? Well, where have we left off in our counting structure? After dividing, we've just found the number of ways to choose ordered sets of 3 letters out of the 5 letters. However, we are still overcounting in some fashion.

The key here is that order doesn't matter (a concept we will explore further in this chapter). The division we just did, $\frac{5!}{(5 - 3)!}$, spits out the number of ways to pick 3 letters, in order, from A, B, C, D, E. This happens to come out to 60 (you can type factorial(5)/factorial(5 - 3) in R to prove it). We'll start to write out these ordered selections. See if you notice any patterns:

ABC, ACB, BAC, BCA, CAB, CBA, ABD, ADB, BAD, BDA, DAB, DBA

Did you catch it? Here, we have written out 12 of the 60 ordered selections.

Consider the first six: ABC, ACB, BAC, BCA, CAB, CBA. They all have the same letters in them: A, B and C; the only thing that makes them distinct is the order in which they are presented (i.e., ABC and ACB have the same letters but in a different order). The same is true for the second set of six above, but with the letters A, B and D: they are permuted differently but represent the same combination.

Think back to one of the original examples for the binomial coefficient: picking people for committees. Does it matter which order you pick the people for the committees in? No, it does not; as long as the same people are in a committee, it does not matter what order we picked them in.

Therefore, if A, B and C are people, then the committees $ABC$ and $CAB$ are the exact same committee. That is, picking Alec, then Bob, then Carl forms the same committee as picking Carl, then Alec, then Bob. Since we don't really care which order they are selected in, this method again overcounts committees: the set of 6 permutations with the letters A, B and C should only count as 1 committee combination.

How do we fix this? Again, by dividing out to adjust for overcounting. Since there are $3!$ ways to permute the people within a committee, and we don't want to count these orderings, we have to divide by $3!$. You can see the intuition in this case: $3! = 6$, and we just saw that 6 ordered selections should have 'boiled down' to 1 committee. You can further convince yourself of this by noting that there are 10 of these groups of 'false sixes' (two of which we wrote out above), because there are 10 ways to group the letters ABCDE into sets of three (ABC, ABD, ABE, ACD, ACE, ADE, BCD, BCE, BDE, CDE) and, within each group, $3! = 6$ ways to permute the letters (which we don't want to count in this case).

So, we divide by the 3!, which completes the formula $\binom{5}{3} = \frac{5!}{(5 - 3)!\,3!}$. It comes out to 10 (in fact, we wrote out the complete answer at the end of the last paragraph), which means that there are 10 ways to select 3 letters from ABCDE, or 10 ways to form a committee of size 3 from 5 different people. We learned that this formula adjusts for over-counting from unnecessary orderings of the leftover letters (dividing by $(n - x)!$ to get us down to just ordered selections of 3) and over-counting from extra orderings within a selection (ABC and ACB are the same in this problem structure).
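We can check this arithmetic quickly in R; both the 'long' formula and the built-in choose function should give 10:

#5!/((5 - 3)! * 3!); should get 10
factorial(5)/(factorial(5 - 3)*factorial(3))
## [1] 10

#the built-in binomial coefficient; should also get 10
choose(5, 3)
## [1] 10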

We can confirm this result in R by generating all possible committees of size 3 from 5 total people using the combinations function, which is in the gtools package (again, we use gtools::combinations to tell R that we want the function combinations specifically from the gtools package).

 

#generate the committees (people labeled 1 to 5)
committees = gtools::combinations(n = 5, r = 3)

#should get choose(5, 3) = factorial(5)/(factorial(3)*factorial(2)) = 10 committees
committees
##       [,1] [,2] [,3]
##  [1,]    1    2    3
##  [2,]    1    2    4
##  [3,]    1    2    5
##  [4,]    1    3    4
##  [5,]    1    3    5
##  [6,]    1    4    5
##  [7,]    2    3    4
##  [8,]    2    3    5
##  [9,]    2    4    5
## [10,]    3    4    5

A final exercise to solidify our understanding of the binomial coefficient is to consider $\binom{n}{x}$ when $x > n$. Thus far, we have implicitly assumed $n \geq x$; after all, we are choosing $x$ objects from a population of $n$ objects, so we should have that the population of objects is at least as large as the group we hope to choose. Consider, then, the case when $x > n$; this problem is explored further in the practice problems at the end of this chapter (as always, you can quickly check your intuition in R with the choose function).

Anyways, we have now learned how to count processes with multiple steps, as well as how to permute and combine objects. From these simple tools, a world of complexity opens up.

 

Sampling Table

Here is a convenient way to arrange the results that we have discussed thus far (and extend our knowledge to cover more ground), from Blitzstein and Hwang (2014).

 

                        Order Matters            Order Doesn't Matter
With Replacement        $n^k$                    $\binom{n + k - 1}{k}$
Without Replacement     $\frac{n!}{(n - k)!}$    $\binom{n}{k}$

This table will allow us to think more deeply about the concepts we have learned in this chapter. Before we dive into each section, let’s think about what Order and Replacement mean.

If ‘Order Matters,’ then two outcomes that have the same elements in a different order are counted as different outcomes; if ‘Order Doesn’t Matter,’ they are counted as one outcome. For example, if ‘Order Matters,’ then ABC, CAB, BAC are counted as three separate outcomes, and if ‘Order Doesn’t Matter’ they are counted as one outcome (since they all have the letters A, B and C in them, and it doesn’t matter how they are arranged). Order matters when we care about permutations (recall that permutations count the number of ways to arrange objects). Generally, order doesn’t matter when we consider combinations: in the committee example, at least, you get the same committee no matter what order you picked people in.

If we are picking ‘With Replacement,’ that just means that after we select an object, we put it back into the population and it can be picked again. For example, if we are drawing cards out of a deck and we are sampling with replacement, we put the card back after we draw it and we could then pick it again. Without replacement, we would take the card out of the deck after we picked it, meaning we could not pick the card again (it also makes the deck a little smaller, obviously. We can only draw without replacement from a 52-card deck, well, 52 times. After that, there are no more cards left!).

The entries in the Sampling Table, then, count the number of ways to choose objects from a group of objects (here, we choose $k$ objects from $n$ objects). Adjusting the constraints of 'Order' and 'Replacement' allows us to define how we choose the objects and essentially moves us around the table. Now, let's explore these different ways to count.

First, let's consider the top left box in the Sampling Table. This box asks us to count the number of ways to select $k$ objects from $n$ objects with replacement where order matters. When we put objects back in after picking them (with replacement), we will always be picking from $n$ objects (since we start with $n$ objects, and we replace each object we take). Since every single time we will have $n$ options and we are picking $k$ times, we simply have to multiply $n$ by itself $k$ times (by the multiplication rule), or $n^k$. Since order actually matters, we aren't overcounting anything and don't have to divide by some factor to correct this: we don't want to throw out the permutations with different orders, because order matters.
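As a quick sanity check on this box, here is a small sketch in R (using the same gtools package we use elsewhere in this chapter), enumerating $k = 2$ ordered draws from $n = 6$ faces of a die with replacement:

#with replacement, order matters: should get 6^2 = 36
nrow(gtools::permutations(n = 6, r = 2, repeats.allowed = TRUE))
## [1] 36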

The next best box to think about is the bottom right: the binomial coefficient. Here, order does not matter and we are taking objects out of the population after we sample them (without replacement), so it makes sense to just use the binomial coefficient $\binom{n}{k}$, since this counts the number of ways to select $k$ objects from a population of $n$ objects where we are not replacing the objects and the order that we select the objects in is not important (for more on why the binomial coefficient has these properties, see the explanation in the previous section).

From here, we move to the bottom left box: $\frac{n!}{(n - k)!}$. This looks almost identical to the binomial coefficient except that it doesn't have the extra $k!$ term in the denominator. That's because, if you recall, we include that $k!$ to divide out the overcounting that results from differences in order (i.e., if order doesn't matter, we want ABC, ACB, CBA, CAB, BCA, BAC to count as 1 outcome, so we divide by 3!). However, since we are in the column 'Order Matters,' we actually want to count these different permutations, so we do everything the same way but do not divide by $k!$, so as to keep these specific permutations.
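Again, a small sketch of this box in R: without replacement but with order mattering, two draws from six options should give $\frac{6!}{(6 - 2)!} = 30$ outcomes.

#without replacement, order matters: should get 6!/4! = 30
nrow(gtools::permutations(n = 6, r = 2, repeats.allowed = FALSE))
## [1] 30

#matches the formula
factorial(6)/factorial(6 - 2)
## [1] 30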

Finally, we will refer to the top right box as the 'Bose-Einstein' result, and it is probably the trickiest box to master. Here, we are sampling with replacement, and order doesn't matter. Let's consider an explanation adapted from Blitzstein and Hwang (2014). Imagine that we are trying to count the number of ways to put $k$ balls in $n$ boxes, where the balls are all the same (indistinguishable) but the boxes are different (distinguishable). First, consider why this satisfies the constraints 'sampling with replacement' and 'order doesn't matter.' Imagine that we put the balls into the boxes one by one. We can think of 'putting a ball in a box' as 'sampling' a box; whenever we 'sample' a box, we just add a ball to it! Since we can put as many balls in any box as we want (i.e., once we put a ball in a box, or 'sample' that box, we can still 'sample' it again) we are essentially sampling boxes with replacement. Since the balls are identical, order doesn't matter. These two outcomes are the same: ball 1 goes in box 1 and ball 2 goes in box 2 vs. ball 2 in box 1 and ball 1 in box 2. It only matters how many balls end up in each box, not the order that they were placed there, since the balls are indistinguishable!

At this point, we should just be convinced that this 'ball-box' scheme (putting identical balls in non-identical boxes) is the same as sampling with replacement such that order doesn't matter. Now, we turn to discussing why $\binom{n + k - 1}{k}$ actually counts the number of ways to perform this type of sampling. Again, we're going to borrow an explanation from Professor Blitzstein (you can read a more thorough discussion from Professor Blitzstein here). Imagine when $n = 3$ and $k = 5$ (we have 3 boxes and 5 balls). One possible outcome of the sampling is below:

| * * * | * | * |

As you might be able to guess, this diagram shows a sample where we have 3 balls in the first box, 1 ball in the second, and 1 ball in the third (the $*$ marks are balls, and the $|$ marks are the walls of the boxes. The walls 'overlap,' so the second $|$ in the sequence above, reading from left to right, is the right-hand wall for the first box and the left-hand wall for the second box).

Now that we can visualize the sampling process in this way, we can actually just count the ways to legally permute the above diagram (this is equivalent to counting the number of ways to perform the sampling, since we already argued that this 'box and balls' scheme is the same in structure as the type of sampling we are interested in). The only things that stay fixed are the two bars on the ends (the diagram wouldn't make sense if there was a $*$ on an end, because we would have one box that wasn't closed). In this case, after we fix the two bars on the ends, we have 2 bars left over and 5 balls to rearrange among themselves. There are 7 spots for these 7 objects (bars and balls), so if they were all different there would be 7! ways to permute them (by the concept of the factorial). However, the balls are all identical to each other, and the bars are identical to each other (the boxes aren't identical in the problem, but in this diagram, swapping bars does not change the boxes). Therefore, we have 7! ways to arrange the objects, and then we have to divide out by 5!, the number of ways to arrange the balls among themselves, and 2!, the number of ways to arrange the bars among themselves. This comes out to $\frac{7!}{5!\,2!}$, which is the same as $\binom{7}{5}$. Notice how $7 = n + k - 1 = 3 + 5 - 1$ and $5 = k$; extending to the general case, then, there are $\binom{n + k - 1}{k}$ ways to perform this sampling (the $n - 1$ comes from the fact that we have one fewer 'movable bar' – that is, a bar that is not on either end of the diagram and thus can be moved – than boxes in our diagram).
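We can sketch a quick check of the Bose-Einstein count in R for the case above ($n = 3$ boxes and $k = 5$ balls); counting multisets directly with gtools::combinations and repeats allowed should match the formula:

#with replacement, order doesn't matter: multisets of size 5 from 3 boxes
nrow(gtools::combinations(n = 3, r = 5, repeats.allowed = TRUE))
## [1] 21

#matches choose(n + k - 1, k) = choose(7, 5)
choose(3 + 5 - 1, 5)
## [1] 21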

 

The Sampling Table may be the most important concept in this chapter because it links together the tools we’ve learned; therefore, it’s extremely important to understand the concepts and various sampling constraints. You can interact with the ‘Permutations’ Shiny app to solidify your understanding; reference this tutorial video for more.

Click here to watch this video in your browser. As always, you can download the code for these applications here.
 

Ultimately, the best way to develop understanding for the entries in the Sampling Table is to practice. Consider the following problem, which can be solved with methods discussed in this chapter. Both analytical solutions (i.e., with pencil and paper) and empirical solutions (i.e., simulations in R) are presented.

Example 1.2 (Call Me Maybe):

Most responsible iPhone users choose a 'passcode' for their device: a chosen security code to prevent others from using their phone. Passcodes are 4 digits long and can consist of the integers 0, 1, …, 9.

  1. How many possible passcodes are there?
  2. Define a ‘unique’ passcode as a passcode with 4 distinct digits (i.e., 2947 is unique but 3391 is not). How many unique passcodes are there?
  3. Define an ‘ascending’ passcode as a passcode where each digit is equal to or larger than the previous digit (i.e., 1489 and 1122 are ascending passcodes, but 1948 is not). How many passcodes are both unique and ascending?
  4. How many ascending (not necessarily unique) passcodes are there?
  5. Describe how your answers to this problem relate to the Sampling Table.

 

Analytical Solution:

  1. There are 10 possible selections for each of the 4 digits (0, 1, …, or 9). By the multiplication rule, there are $10^4 = 10,000$ possible passcodes.
  2. There are 10 possible choices for the first digit; for the second digit, we can pick any digit except for the digit we picked first, meaning we have 9 choices. Continuing in this way, we have $10 \cdot 9 \cdot 8 \cdot 7 = 5040$ possible passcodes.
  3. Every possible set of digits that we sample without constraint for a passcode can be ordered to make an 'ascending passcode,' but there is only one of these orders that is 'legal' (i.e., ascending order: if you sample the digits 4, 6, 9 and 2, the only ordering that makes an ascending passcode is 2469). So, we can adjust our answer to (b) by dividing out all of the $4!$ possible orders for the unique passcode (only one is ascending). This yields $\frac{10 \cdot 9 \cdot 8 \cdot 7}{4!} = \binom{10}{4} = 210$.
  4. This is the Bose-Einstein paradigm; we are sampling digits from 0 to 9 with replacement (the passcode need not be unique), and order doesn't matter (i.e., for any 4 digits we sample, there is one correct ascending order to put them in, so sequences with the same digits in different orders 'represent' only one outcome). Therefore, there are $\binom{10 + 4 - 1}{4} = \binom{13}{4} = 715$ of these passcodes.
  5. We start in the top left of the Sampling Table in part (a) and then move counterclockwise. In part (a), we sample digits with replacement (i.e., you can have the passcode 9999) and order matters (i.e., the passcode 2299 is different from the passcode 9922). In part (b), when we introduce uniqueness, order still matters, but now we are sampling without replacement (i.e., once you select a 9, you cannot select a 9 again). In part (c), in addition to uniqueness, which determines sampling without replacement, we essentially mandate that order doesn't matter: if we sample 9218 or 8192, both 'represent' the passcode 1289 (the 'ascending' rule forces a single order; 'order doesn't matter' here means that we will arrange whatever digits we sample into that single order). Sampling without replacement with order not mattering gives the binomial coefficient. Finally, in part (d), we remove the uniqueness condition (so we sample with replacement) but keep the ascending condition (so order does not matter), which gives us Bose-Einstein.

Empirical Solution:

Since 4-digit passcodes built from 10 possible digits produce a large number of cases to enumerate, we consider only a 4-digit passcode built from 6 digits. The permutations function in the gtools package generates permutations of a given size from a given vector, and we can dictate whether repeated digits are allowed (i.e., whether passcodes must be unique) with the repeats.allowed argument.

 

### part a. ###
#should get 6^4 = 1296
dim(gtools::permutations(n = 6, r = 4, repeats.allowed = TRUE))[1]
## [1] 1296

 

### part b. ###
#should get factorial(6)/factorial(2) = 360
dim(gtools::permutations(n = 6, r = 4, repeats.allowed = FALSE))[1]
## [1] 360

 

## part c. ##
#find the unique passcodes
uniques = gtools::permutations(n = 6, r = 4, repeats.allowed = FALSE)

#create unique and ascending passcodes
uniques.ascending = apply(uniques, 1, function(x) sort(x))

#orient the matrix (the function t() just transposes a matrix; flips it on its side)
uniques.ascending = t(uniques.ascending)

#create a data frame
uniques.ascending = data.frame(uniques.ascending)

#remove duplicates; should get choose(6, 4) = 15
dim(unique(uniques.ascending))[1]
## [1] 15

 

## part d. ##
#find all passcodes
passcodes = gtools::permutations(n = 6, r = 4, repeats.allowed = TRUE)

#create ascending passcodes
ascending = apply(passcodes, 1, function(x) sort(x))

#orient the matrix (the function t() just transposes a matrix; flips it on its side)
ascending = t(ascending)

#create a data frame
ascending = data.frame(ascending)

#remove duplicates; should get choose(6 + 4 - 1, 4) = 126
dim(unique(ascending))[1]
## [1] 126

 

You can further solidify your understanding of these concepts with this problem that counts the number of possible pairings of $2n$ people. We also consider this problem in the practice section of this chapter.

 Click here to watch this video in your browser.
 

Story Proofs

This is the last bit of counting that we’ll cover in this chapter, and it’s sort of just like it sounds: proving something by means of a story. Basically, you explain something with words instead of with algebra.

Story proofs are really at the heart of this book; we hope to, above all, present an engaging narrative. While mathematical, rigorous proofs are important, being able to tell a story about a specific concept can be much more elegant and even promote a deeper understanding of the material. You can use story proofs for just about anything, and here we will demonstrate their use in a counting example.

Example 1.3 (Symmetry of the Binomial Coefficient):

Prove, with a story, that:

\[\binom{n}{k} = \binom{n}{n - k}\]

This is a relatively straightforward example, and also pretty useful in a pinch: it's often referenced as the symmetry of the binomial coefficient. You may be able to immediately see that plugging in the algebraic definition of a binomial coefficient makes this proof trivial:

\[\binom{n}{k} = \frac{n!}{(n - k)!\,k!} = \frac{n!}{k!\,(n - k)!} = \binom{n}{n - k}\]

The above is clearly true; the only difference between the two sides is that the terms in the denominator are written in a different order, and we know that multiplication is commutative (i.e., $a \cdot b = b \cdot a$). However, we want to prove this relation by using a story instead of mathematical rigor.

Here, a good way to think about this equation is to consider the number of ways to pick a team of $k$ people out of $n$ total people. The left side of the equation, $\binom{n}{k}$, gives the number of ways to pick the team of size $k$ (we should be comfortable with this concept at this point). The right side, $\binom{n}{n - k}$, gives the number of ways to pick the people not on the team (if $k$ people are on the team, then $n - k$ people are not on the team), which is the same thing as picking who is on the team. That is, since there are only two options, (1) on the team or (2) not on the team, picking who is on the team also picks who is off the team (the people who weren't picked) and vice versa. So, the two sides are equal, as we saw with the algebraic proof above.

Let’s ground this result with an example. Imagine that there are 7 kids on the playground, but only 5 are going to be picked for the basketball team at recess. Picking the 5 that are on the team is the same as picking the 2 that are not on the team; if you pick 2 that will not be on the team, we know that the other 5 must be on the team! Think of it as splitting the population into two different groups; if we observe who is in one group, we know with a certainty who is in the other group.
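A quick numerical check of the playground example, using R's built-in choose function:

#picking the 5 kids on the team...
choose(7, 5)
## [1] 21

#...is the same as picking the 2 kids who sit out
choose(7, 2)
## [1] 21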

We can further visualize this in R by considering a specific example where we generate committees of size 2 and size 1 from a population of 3 people.

 

#generate the committees (people labeled 1 to 3)
committees2 = gtools::combinations(n = 3, r = 2)
committees1 = gtools::combinations(n = 3, r = 1)

#print the sets of the two committees
committees2; committees1
##      [,1] [,2]
## [1,]    1    2
## [2,]    1    3
## [3,]    2    3
##      [,1]
## [1,]    1
## [2,]    2
## [3,]    3

Consider the output; the first 3×2 matrix represents all committees of size 2 (i.e., the first row means that we have a committee made up of person 1 and 2). The second 3×1 matrix is similar, but instead the rows are of length 1, since the committees are of size 1.

Now imagine comparing the two outputs. The first row of the committees2 matrix maps to the third row of the committees1 matrix; that is, picking the first and second person to be on the committee of size 2 is the same as picking the third person to NOT be on the committee. The matrices have the same number of rows, of course, because picking who is on the committee is equivalent to picking who is off the committee, by the story above!

 

Example 1.4:

Prove, with a story, that:

\[k \binom{n}{k} = n \binom{n - 1}{k - 1}\]

Again, this is not difficult to show with algebra. Plugging in the definition of the binomial coefficient:

\[k \binom{n}{k} = k \cdot \frac{n!}{(n - k)!\,k!} = \frac{n!}{(n - k)!\,(k - 1)!} = n \cdot \frac{(n - 1)!}{(n - k)!\,(k - 1)!} = n \binom{n - 1}{k - 1}\]

However, again, we want to consider a story. The best way to envision this relationship is again with picking committees; this time, though, imagine that you are picking a committee of $k$ people out of a population of $n$ people, and you are selecting one president for the committee. So, there are two selections: picking the $k$ people to be on the committee, and then picking the one person from the $k$ people on the committee to be the president.

First, let's consider the left side of the equation. Here, we have the basic $\binom{n}{k}$, which of course gives the number of ways to select the committee of $k$ people. However, once we've done this, we have another selection: picking the president of the committee. Since there are $k$ people in the committee, we have $k$ options to choose from, and thus by the multiplication rule we just multiply the first choice, picking the committee, or $\binom{n}{k}$, by the second choice, picking the president from the committee of $k$ people. By this logic, the left hand side of the equation counts what we are interested in: the number of ways to pick a committee with a president.

We now have to prove that the right hand side of the equation gives the same thing. Let's imagine that this time, instead of picking the committee first, we select the president and then the committee. So, if we are picking the president first from the population, we of course have $n$ choices (we can pick anyone in the whole population). Then, after picking the president, we have to select the committee, and since there are $n - 1$ people left (we have already taken one out to be president) and we need $k - 1$ more people in the committee (we already have the president), we just multiply by $\binom{n - 1}{k - 1}$. So, by the multiplication rule, we multiply the first choice of picking the president from $n$ people by the second choice of picking the rest of the $k$-person committee from the remaining $n - 1$ people.

So, then, the only difference between the two sides is that the left side picks the committee first and then the president from the committee, while the right side picks the president from the population and then selects the rest of the committee. Clearly, switching the order of these selections does not change the number of ways to choose a committee with a president, and our story holds: both sides of the equation are counting the same thing.
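Before enumerating committees, we can sketch a quick numerical check of both sides of the identity for a small case (say $n = 4$ and $k = 3$):

#left side: pick the committee, then the president; should get 3*4 = 12
3*choose(4, 3)
## [1] 12

#right side: pick the president, then the rest of the committee; should also get 12
4*choose(3, 2)
## [1] 12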

We can confirm our intuition in R by considering a committee of size 3 drawn from a population of size 4, with one president.

 

#label people 1 to 4
people = 1:4

#first, we do the president-first approach
#initialize the committees
committees1 = character(0)

#iterate through potential presidents
for(i in 1:length(people)){
  
  #generate the new committees, with i as the president
  new.committees = cbind(i, gtools::combinations(n = length(people) - 1, r = 2, v = people[-i]))
  
  #add on to existing committees
  committees1 = rbind(committees1, new.committees)
}

  
#remove column names
colnames(committees1) = c("", "", "")



#second, we do the committee-first approach; initialize here
committees2 = character(0)

#generate all of the committees, without presidents
new.committees = gtools::combinations(n = 4, r = 3)

#iterate through the committees and add presidents
for(i in 1:dim(new.committees)[1]){
  
  #pick each person as president
  for(j in 1:3){
    
    #add the committee
    committees2 = rbind(committees2, c(new.committees[i, j], new.committees[i, -j]))
  
  }
}

#remove column names
colnames(committees2) = c("", "", "")

#print the two groups of committees; should be the same size
#   the first person is president in each case
#we should have k*choose(n, k) = 3*choose(4, 3) = 12 total
committees1
##                  
##  [1,] "1" "2" "3"
##  [2,] "1" "2" "4"
##  [3,] "1" "3" "4"
##  [4,] "2" "1" "3"
##  [5,] "2" "1" "4"
##  [6,] "2" "3" "4"
##  [7,] "3" "1" "2"
##  [8,] "3" "1" "4"
##  [9,] "3" "2" "4"
## [10,] "4" "1" "2"
## [11,] "4" "1" "3"
## [12,] "4" "2" "3"

 

committees2
##                  
##  [1,] "1" "2" "3"
##  [2,] "2" "1" "3"
##  [3,] "3" "1" "2"
##  [4,] "1" "2" "4"
##  [5,] "2" "1" "4"
##  [6,] "4" "1" "2"
##  [7,] "1" "3" "4"
##  [8,] "3" "1" "4"
##  [9,] "4" "1" "3"
## [10,] "2" "3" "4"
## [11,] "3" "2" "4"
## [12,] "4" "2" "3"

 

Symmetry

 

This is not a topic exclusive to probability, Statistics in general, or even this chapter within this book; it is simply a tool that can help solve complicated problems. Symmetry, when used correctly, can be both very powerful and extremely elegant.

Nearly everyone is familiar with the colloquial concept of ‘symmetry,’ but it’s not always easy to apply symmetry to actual problems. In general, symmetry can greatly simplify a complex problem, reducing the structure to something much more manageable; in the context of this book, think of it as a problem solving tool (it is such a powerful problem solving tool that we gave it a section all to itself!). It’s probably best to discuss an example where symmetry can apply.

Example 1.5 (Spades)

Imagine a standard, well-shuffled, 52 card deck of cards. You are dealt four cards from the deck at random. On average, how many spades will you get in this four card deal?

 

Analytical Solution: At first glance, this seems like a difficult problem. We can be dealt anywhere from 0 to 4 spades, and there are many possible combinations for each (i.e., there are many ways to be dealt 0 spades). We might consider taking a ‘weighted average’ (we will formalize this concept in later chapters); that is, find the probability that we are dealt 0 spades, then 1 spade, etc., and then multiplying the probability by the number of spades found (i.e., the probability of finding 1 spade times 1, plus the probability of finding 2 spades times 2, etc.). This is certainly possible, but it seems like a lot of work.

Instead, let's use symmetry. We are being dealt 4 of the 52 cards in the deck; we can think of there being 13 groups of 4 cards each in this deck (i.e., cards 1 through 4 are one group, then cards 5 through 8, etc.), since $52/4 = 13$, and we simply own the first 'group' of 4 cards. We know that the total number of spades across all of these groups is always 13; there are 13 spades in the deck, and these 13 groups account for the entire deck. By symmetry, we know that, on average, every group will get the same number of spades. So, since there are 13 groups that share 13 spades between them, and every group has the same average number of spades, each group must get 1 spade on average (it wouldn't make sense if certain groups of 4 tended to get spades more often). So, in your four-card deal, you expect one spade.

Empirical Solution:

 

#replicate
set.seed(110)
sims = 1000

#create a deck; define spades as 1, everything else as 0
deck = c(rep(1, 13), rep(0, 52 - 13))

#keep track of the number of spades we sample
spades = rep(NA, sims)

#run the loop
for(i in 1:sims){
  
  
  #deal the four card hand
  hand = sample(deck, 4, replace = FALSE)
  
  #see how many spades we got
  spades[i] = sum(hand)
}

#should get 1
mean(spades)
## [1] 1.012

The key argument in the analytical solution above, of course, is the ‘symmetry’ argument. Consider what we argued: by symmetry, every ‘group’ of 4 cards should expect the same number of spades on average. Does this make sense? Well, since the deck is randomly shuffled, and the groups are arbitrary (i.e., we don’t deal in any sort of biased way to any of the groups), this must be true. That is, there’s no information in the problem that tells us that the groups should be at all different in terms of the number of spades that they are dealt on average.

It’s a strange argument, but tough to dispute and, as we saw, powerful. We were pondering performing a probability calculation instead of using this symmetry argument; this might have been feasible in this case, but what if we had a much larger deck and were dealt a much larger hand? Here, the elegant symmetry argument is much easier to implement. In general, it’s more generalizable and intuitive to understand.

Throughout this book, you will notice that symmetry arguments often provide extremely effective approaches to solving problems. Symmetry is a theme that will surface for nearly every topic we cover, which is why we include it in the first chapter. For a final example, consider another problem where symmetry can be used effectively:

 Click here to watch this video in your browser.
 

Practice

 

Problems

1.1

Consider a standard, well-shuffled deck of cards. What is the probability that the 4 Aces are all adjacent? Here, define ‘adjacent’ as taking four consecutive positions; i.e., the 1st, 2nd, 3rd and 4th cards in the deck. The definition is not circular: the first and last cards in the deck are not considered adjacent.

 

1.2
  1. There are eight schools in the Ivy League (Harvard's athletic conference): Harvard, Yale, Princeton, Brown, Columbia, Dartmouth, Cornell and Penn. The academic ranking of these schools is often a topic of interest and controversy. How many different ways are there to rank these schools?
  2. Sometimes, Harvard, Yale and Princeton are referred to as the ‘Big 3,’ and are often grouped together as schools because of their extensive similarities. If indeed these three schools were identical – that is, the ordering ‘Harvard Yale Princeton’ is taken as identical to the ordering ‘Yale Princeton Harvard’ – how many ways would there now be to rank the Ivies?

 

1.3
  1. You are tasked with dividing 20 kids into two kickball teams at recess. Games have been very even in the past, so you decide to make things more interesting by giving 1 team 9 players and the other team 11 players. How many ways could you make these teams?
  2. Can you write your answer to part (a) in a different format?
  3. After the first game, the team with 9 people complained because of the inherent disadvantage. For the second game, you decide to simply put 10 people on each team. How many ways can you do this?

 

1.4

For this problem, assume a normal, well-shuffled 52 card deck. You are dealt 5 cards (called a ‘hand’) randomly from the deck.

  1. The best hand in Poker is a royal flush: the 10, Jack, Queen, King and Ace of the same suit. What is the probability that you are dealt a royal flush?
  2. What is the probability that you are dealt a 3 of a kind (getting exactly 3 of the same value, like three jacks)?

 

1.5

Tony has 5 meetings to schedule this business week (Monday through Friday). Define a ‘permutation’ here as a schedule of meetings by day (i.e., two meetings on Monday, one meeting on Thursday and two meetings on Friday is one permutation). If meetings are indistinguishable, how many permutations are there if Tony does not want to have all five meetings on a single day?

 

1.6

Consider a state that has 5 different counties (a ‘county’ is a collection of towns: for example, the town Burlington, Connecticut is in Hartford County), and each county has 6 different towns. Imagine that you want to visit every town in the state; however, you don’t want to visit each county more than once (you can fly from any town to any other town in the state). Define a permutation as one specific ordering of the 30 visits you make (one to each town). How many permutations are there?

 

1.7

Imagine a game of tic-tac-toe where the players randomly select a blank space each turn to make their move. If $X$ goes first, what is the probability that $X$ wins on their third move?

 

1.8

Imagine a modified game of tic-tac-toe called 'tic-tac-max.' The only difference in tic-tac-max is that players put down $X$'s and $O$'s until the board is completely filled, not just until someone gets 3 in a row. $X$ still goes first.

  1. Define a permutation as one game sequence; i.e., player 1 puts an $X$ in the top left spot, then player 2 puts an $O$ in the top right spot, etc. Permutations are considered distinct if they are completed in different orders. How many possible permutations are there to play this game?
  2. Define a ‘final board’ as the configuration of the board after each player has put all of their ‘pieces.’ How many possible ‘final boards’ are there?

 

1.9

(With help from Nicholas Larus-Stone and CJ Christian)

There are $2n$ people. How many ways are there to pair the people up?

For example, if we had four people named A, B, C and D, one permutation would be the pairs (A, B) and (C, D). A second permutation would be the pairs (A, C) and (B, D) (one permutation is defined by pairing everyone).

 

1.10

Imagine the digits . How many ways are there to make a three-digit number and seven-digit number from these digits? For example,  and  as the three- and seven-digit numbers is one permutation; a second permutation is  and .

 

1.11

(With help from Juan Perdomo)

Consider these 5 points:

You can make plots like this in Latex here.

Imagine selecting two points at random and drawing a straight line in between the two points. Do this 5 times, with the constraint that you cannot select the same pair twice. What is the probability that the lines and points form a pentagon (i.e., a five-sided, five-angled, closed shape)?

 

1.12

There are $n$ people, each with a left and a right foot. Each has 2 shoes (one for each foot), so there are $2n$ shoes. Define a 'permutation' as an allocation of the $2n$ shoes to the $n$ people, such that the left shoes are on the left feet and the right shoes are on the right feet. If shoes are distinguishable, how many permutations are there?

 

1.13

Nick claims that, when , we have  because, by the definition of the binomial coefficient, the multiplication in these terms cancel out. Explain why Nick is wrong, using both math and intuition.

 

1.14

Consider 10 tosses of a fair coin. Let $A$ be the event that you get exactly 5 heads, and let $B$ be the event that you get 10 heads.

  1. Compare $P(A)$ and $P(B)$.
  2. Now consider the sequences $HHHHHTTTTT$ and $HHHHHHHHHH$. Compare the probabilities that these sequences occur.
  3. Discuss your results in parts a. and b.

 

1.15

How many possible 5-letter words are there such that no two adjacent letters within the word are the same? That is, 'alamo' is ok, but 'aalmo' is not ok, since the letter 'a' repeats back-to-back.

 

1.16

(Dedicated to Diana Stone, with help from Matt Goldberg)

There is a restaurant in France called ‘7367.’ After every meal, the waiter rolls four fair, 10-sided dice (numbered 1 through 10). If the dice show a 3, a 6 and two 7’s (in any order) the meal is free.

  1. If you eat at this restaurant once, what is the probability of winning a free meal?
  2. This dice game has made the restaurant very popular, but the manager of the restaurant wants to be able to adjust the game. He would like to present variations of the game for meals that are more expensive (lower probability of winning) and for meals that are less expensive (higher probability of winning).

Keeping the same structure of the dice game, how can you change the ‘winning number’ (i.e., 7367) to suit the manager’s needs, both for higher and lower probabilities of winning? Support your answer both with calculations and intuition.
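
For part 1, a quick simulation can double-check the calculation; this is a rough sketch, with an arbitrary seed and a large trial count because the winning probability is small:

```r
# Rough simulation check for 1.16, part 1: a meal is free if the four d10 rolls
# are some ordering of 7, 3, 6, 7.
set.seed(110)

one_meal <- function() {
  rolls <- sample(1:10, size = 4, replace = TRUE)
  all(sort(rolls) == c(3, 6, 7, 7))
}

mean(replicate(10^6, one_meal()))  # empirical estimate of P(free meal)
```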

 

1.17

Matt, Dan, Alec, Edward and Patrick are settling in for board game night. They pick their spots randomly at a round table with 5 evenly spaced seats. What is the probability that Dan is sitting next to Alec (i.e., he is sitting either directly to the left or right of Alec)?
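
An optional empirical check in R (the seat numbering, seed, and trial count are our own choices; seats 1 and 5 are neighbors around the circle):

```r
# Rough simulation check for 1.17: random seating at a round table with seats
# 1 through 5 arranged in a circle (seat 5 is next to seat 1).
set.seed(110)

one_night <- function() {
  seating <- sample(c("Matt", "Dan", "Alec", "Edward", "Patrick"))  # seat 1 to seat 5
  gap <- abs(which(seating == "Dan") - which(seating == "Alec"))
  gap == 1 || gap == 4  # directly adjacent, or adjacent across the seat 5 / seat 1 gap
}

mean(replicate(100000, one_night()))  # empirical estimate
```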

 

1.18

Jon Snow and Robb Stark are both in need of knights. There are currently 100 available knights at Winterfell (Jon and Robb’s home); Jon is going to take 10 knights and go North, and Robb is going to take 30 knights and go South.

To avoid potentially displaying favoritism and upsetting the knights of the realm, Jon and Robb will both randomly select their knights. They will flip a coin to determine who goes first; the winner of the coin flip will randomly select his required number of knights (10 or 30) and the loser of the coin flip will then randomly select his required number of knights (10 or 30) from the remaining number of knights (90 or 70).

  1. Jon claims that there are $\binom{100}{10}\binom{90}{30}$ possible combinations (one single combination marks the knights that Jon took, the knights that Robb took, and the knights that stayed back). Robb, though, claims that Jon is off by a factor of 2, because he forgot to account for who picks their knights first (Jon or Robb). Who is correct?
  2. Robb and Jon flip the coin, and Robb wins (the coin lands on the “direwolf” sigil, not the “crow” sigil). However, Robb is not pleased: “Great,” he says, “now I go first, which means I have a higher chance of taking Theon” (Theon is one of the 100 knights and is well known for his ineptitude). Robb and Jon will still both randomly select their knights, as the rules stipulate; does Robb have a higher chance of selecting Theon if he goes first?

 

1.19

Imagine a game called ‘Cards Roulette.’ The four Aces are removed from a standard deck and are well shuffled. You and a friend take turns drawing from these four cards without replacement. The first player to draw a red Ace (Hearts or Diamonds) loses.

  1. If you would like to maximize your probability of winning this game, should you go first or second?
  2. Compare this to the Russian Roulette problem from this chapter (click here for a video recap of this problem). Think about how these two problems compare in structure and make an intuitive argument comparing the two solutions.

 

1.20

Using a story proof, show:

\[ \sum_{k=0}^{n} \binom{n}{k} 2^k = 3^n \]

where $n$ is a positive integer.

Hint: During police interrogations, it is common to adapt a ‘good cop/bad cop’ strategy. That is, two cops enter the interrogation room, and one cop is the ‘good cop’ (they are nice to the perpetrator to try to get them to open up) while the other cop is the ‘bad cop’ (they are rude and possibly even threatening to scare the perpetrator).

Imagine that you are the police commissioner and you are presented with pairs of cops; the cops come in teams of two. It is your job to select teams, and then assign which cops will be the ‘good cops’ and which cops will be the ‘bad cops’ (each team must have one good cop and one bad cop).

 

1.21

Define ‘skip-counting’ as an alternative way to count integers. Imagine counting from 1 to 10: a legal ‘skip-count’ is an ascending sequence of integers that starts with 1 and ends with 10 (e.g., 1-2-5-6-10 and 1-3-10 are both legal ‘skip-counts’).

How many skip-counts are there if we skip-count from 1 to some integer $n$?

 

1.22

Masterminded by Alexander Lee.

The NFL (National Football League) consists of 32 teams that each play 17 games in a season (each team is also given one ‘bye’ week that they have off, but for simplicity in this problem we will assume all teams play in all 17 weeks). By extension, there are 16 games each week (each of the 32 teams plays one other team).

Imagine entering an NFL betting pool with the following rules: you can purchase a single entry for $1, which allows you to pick one game per week until you are wrong. That is, every week, you select a team out of the 32 that you think will win; if you are correct (the team wins or ties), you advance to the next week. You win if you ‘survive’ for 17 weeks (pick 17 games, one in each week, correctly).

You are allowed to purchase multiple entries; simply imagine that different entries are different chances to play the game. Each entry is independent (you can pick different games, the same games, etc. with different entries) and when an entry fails (you pick a wrong game for that entry) the entry is eliminated, but other entries may continue. If any one of your entries makes it 17 weeks, you win.

What is the least amount of money you have to spend on entries in order to implement a strategy that guarantees victory (i.e., at least one entry survives all 17 weeks)?

 

BH Problems

 

The problems in this section are taken from Blitzstein and Hwang (2014). The questions are reproduced here, and the analytical solutions are freely available online. Here, we will only consider empirical solutions: answers/approximations to these problems using simulations in R.
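
A convenient pattern for such simulations is to write a function that simulates one trial and returns TRUE if the event of interest occurred, repeat it many times with `replicate`, and average the results. As a minimal illustration of the pattern (the example event, seed, and number of trials are arbitrary choices), here is an estimate of the probability of exactly 5 heads in 10 tosses of a fair coin:

```r
# Monte Carlo pattern: simulate one trial, repeat many times, average the results.
# Illustrative event: exactly 5 heads in 10 tosses of a fair coin.
set.seed(110)  # any seed works; fixing one makes the run reproducible

one_trial <- function() {
  tosses <- sample(c("H", "T"), size = 10, replace = TRUE)
  sum(tosses == "H") == 5
}

mean(replicate(100000, one_trial()))  # empirical estimate
choose(10, 5) / 2^10                  # analytical value, about 0.246
```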

 

BH 1.8
  1. How many ways are there to split a dozen people into 3 teams, where one team has 2 people, and the other two teams have 5 people each?
  2. How many ways are there to split a dozen people into 3 teams, where each team has 4 people?

 

BH 1.9
  1. How many paths are there from the point $(0, 0)$ to the point $(110, 111)$ in the plane such that each step either consists of going one unit up or one unit to the right?
  2. How many paths are there from $(0, 0)$ to $(210, 211)$, where each step consists of going one unit up or one unit to the right, and the path has to go through $(110, 111)$?

 

BH 1.16

Show that for all positive integers $n$ and $k$ with $n \ge k$, $\binom{n}{k} + \binom{n}{k-1} = \binom{n+1}{k}$, doing this in two ways: (a) algebraically and (b) with a story, giving an interpretation for why both sides count the same thing.

 

BH 1.18
  1. Show using a story proof that $\binom{k}{k} + \binom{k+1}{k} + \binom{k+2}{k} + \cdots + \binom{n}{k} = \binom{n+1}{k+1}$, where $n$ and $k$ are positive integers with $n \ge k$. This is called the hockey stick identity.
  2. Suppose that a large pack of Haribo gummi bears can have anywhere between 30 and 50 gummi bears. There are 5 delicious flavors: pineapple (clear), raspberry (red), orange (orange), strawberry (green, mysteriously), and lemon (yellow). There are 0 non-delicious flavors. How many possibilities are there for the composition of such a pack of gummi bears? You can leave your answer in terms of a couple binomial coefficients, but not a sum of lots of binomial coefficients.

 

BH 1.22

A certain family has 6 children, consisting of 3 boys and 3 girls. Assuming that all birth orders are equally likely, what is the probability that the 3 eldest children are the three girls?
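
One possible simulation sketch for this problem (seed and trial count are arbitrary):

```r
# Empirical approximation for BH 1.22: shuffle the birth order of 3 girls (G)
# and 3 boys (B), then check whether the eldest three are all girls.
set.seed(110)

one_family <- function() {
  birth_order <- sample(c("G", "G", "G", "B", "B", "B"))
  all(birth_order[1:3] == "G")
}

mean(replicate(100000, one_family()))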

 

BH 1.23

A city with 6 districts has 6 robberies in a particular week. Assume the robberies are located randomly, with all possibilities for which robbery occurred where equally likely. What is the probability that some district had more than 1 robbery?
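
A rough empirical approximation (arbitrary seed and trial count):

```r
# Empirical approximation for BH 1.23: place 6 robberies uniformly at random
# among 6 districts and check whether any district got more than one.
set.seed(110)

one_week <- function() {
  districts <- sample(1:6, size = 6, replace = TRUE)
  any(duplicated(districts))
}

mean(replicate(100000, one_week()))
```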

 

BH 1.26

A college has 10 (non-overlapping) time slots for its courses, and blithely assigns courses to time slots randomly and independently. A student randomly chooses 3 of the courses to enroll in. What is the probability that there is a conflict in the student’s schedule?
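
A similar sketch works here, since a ‘conflict’ is just a repeated time slot:

```r
# Empirical approximation for BH 1.26: the student's 3 courses land independently
# and uniformly in 10 time slots; a conflict means at least two share a slot.
set.seed(110)

one_schedule <- function() {
  slots <- sample(1:10, size = 3, replace = TRUE)
  any(duplicated(slots))
}

mean(replicate(100000, one_schedule()))
```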

 

BH 1.27

For each part, decide whether the blank should be filled in with =, < or > and give a clear explanation.

  1. (probability that the total after rolling 4 fair dice is 21) vs. (probability that the total after rolling 4 fair dice is 22)
  2. (probability that a random 2-letter word is a palindrome) vs. (probability that a random 3-letter word is a palindrome)

 

BH 1.29

Elk dwell in a certain forest. There are $N$ elk, of which a simple random sample of size $n$ are captured and tagged (“simple random sample” means that all $\binom{N}{n}$ sets of $n$ elk are equally likely). The captured elk are returned to the population, and then a new sample is drawn, this time with size $m$. This is an important method that is widely used in ecology, known as capture-recapture. What is the probability that exactly $k$ of the $m$ elk in the new sample were previously tagged? (Assume that an elk that was captured before doesn’t become more or less likely to be captured again.)
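
Because the problem is stated symbolically, an empirical check needs concrete numbers; the values below ($N = 100$, $n = 20$, $m = 15$, $k = 4$) are our own illustrative choices, and `dhyper` is R’s built-in hypergeometric PMF, included only for comparison:

```r
# Empirical approximation for BH 1.29 with illustrative values (our own choices).
set.seed(110)

N <- 100  # total elk in the forest
n <- 20   # elk tagged in the first sample
m <- 15   # size of the second sample
k <- 4    # number of tagged elk we are looking for in the second sample

population <- c(rep(1, n), rep(0, N - n))  # 1 = tagged, 0 = untagged

one_recapture <- function() {
  sum(sample(population, size = m)) == k   # tagged count in the new sample
}

mean(replicate(100000, one_recapture()))   # empirical estimate
dhyper(x = k, m = n, n = N - n, k = m)     # exact hypergeometric probability
```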

 

BH 1.31

A jar contains $r$ red balls and $g$ green balls, where $r$ and $g$ are fixed positive integers. A ball is drawn from the jar randomly (with all possibilities equally likely), and then a second ball is drawn randomly.

  1. Explain intuitively why the probability of the second ball being green is the same as the probability of the first ball being green.
  2. Define notation for the sample space of the problem, and use this to compute the probabilities from (a) and show that they are the same.
  3. Suppose that there are 16 balls in total, and that the probability that the two balls are the same color is the same as the probability that they are different colors. What are $r$ and $g$ (list all possibilities)?

 

BH 1.32

A random 5-card poker hand is dealt from a standard deck of cards. Find the probability of each of the following possibilities (in terms of binomial coefficients).

  1. A flush (all 5 cards being of the same suit; do not count a royal flush, which is a flush with an ace, king, queen, jack, and 10).
  2. Two pair (e.g., two 3’s, two 7’s, and an ace).

 

BH 1.40

A norepeatword is a sequence of at least one (and possibly all) of the usual 26 letters a, b, c, …, z, with repetitions not allowed. For example, “course” is a norepeatword, but “statistics” is not. Order matters, e.g., “course” is not the same as “source.”

A norepeatword is chosen randomly, with all norepeatwords equally likely. Show that the probability that it uses all 26 letters is very close to $1/e$.
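
As a quick numerical check of the claim (not the requested proof): there are $26!/(26-k)!$ norepeatwords of length $k$, so the probability of interest is $26!$ divided by the total number of norepeatwords:

```r
# Numerical check for BH 1.40: there are 26!/(26 - k)! norepeatwords of length k,
# so P(uses all 26 letters) = 26! divided by the total number of norepeatwords.
k <- 1:26
counts <- factorial(26) / factorial(26 - k)
counts[26] / sum(counts)  # probability of using all 26 letters
exp(-1)                   # 1/e, for comparison
```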

 

BH 1.48

A card player is dealt a 13-card hand from a well-shuffled, standard deck of cards. What is the probability that the hand is void in at least one suit (“void in a suit” means having no cards of that suit)?
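
A possible simulation sketch; only the suits of the cards matter here:

```r
# Empirical approximation for BH 1.48: represent the deck by its suits and check
# whether a random 13-card hand misses at least one suit.
set.seed(110)

deck_suits <- rep(c("clubs", "diamonds", "hearts", "spades"), each = 13)

one_hand <- function() {
  hand <- sample(deck_suits, size = 13)
  length(unique(hand)) < 4  # void in at least one suit
}

mean(replicate(100000, one_hand()))
```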

 

BH 1.52

Alice attends a small college in which each class meets only once a week. She is deciding between 30 non-overlapping classes. There are 6 classes to choose from for each day of the week, Monday through Friday. Trusting in the benevolence of randomness, Alice decides to register for 7 randomly selected classes out of the 30, with all choices equally likely. What is the probability that she will have classes every day, Monday through Friday? (This problem can be done either directly using the naive definition of probability, or using inclusion-exclusion.)
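
A possible simulation sketch, labeling the 30 classes by their weekday:

```r
# Empirical approximation for BH 1.52: 6 classes on each weekday; Alice picks 7
# of the 30 at random, and we check whether all five days are covered.
set.seed(110)

class_days <- rep(c("Mon", "Tue", "Wed", "Thu", "Fri"), each = 6)

one_registration <- function() {
  picked <- sample(class_days, size = 7)
  length(unique(picked)) == 5
}

mean(replicate(100000, one_registration()))
```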

 

BH 1.59

There are 100 passengers lined up to board an airplane with 100 seats (with each seat assigned to one of the passengers). The first passenger in line crazily decides to sit in a randomly chosen seat (with all seats equally likely). Each subsequent passenger takes his or her assigned seat if available, and otherwise sits in a random available seat. What is the probability that the last passenger in line gets to sit in his or her assigned seat? (This is a common interview problem, and a beautiful example of the power of symmetry.)
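
One way to approximate this empirically is to simulate the boarding process directly (seed and trial count are arbitrary):

```r
# Empirical approximation for BH 1.59: simulate the boarding process directly.
set.seed(110)

one_flight <- function(n_seats = 100) {
  free <- rep(TRUE, n_seats)                 # which seats are still open
  free[sample(1:n_seats, 1)] <- FALSE        # passenger 1 picks any seat at random
  for (p in 2:(n_seats - 1)) {
    if (free[p]) {
      free[p] <- FALSE                       # own seat is open, so take it
    } else {
      free[sample(which(free), 1)] <- FALSE  # otherwise take a random open seat
    }
  }
  free[n_seats]                              # TRUE if the last passenger's seat is open
}

mean(replicate(10000, one_flight()))  # empirical estimate
```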
