My Favorite One-Liners: Part 84

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Every once in a while, I’ll show my students that there’s a difficult way to do a problem that I don’t want them to do for homework. For example, here’s the direct derivation of the mean of the binomial distribution using only Precalculus; this would make an excellent homework problem for the Precalculus teacher who wants to torture his/her students:

E(X) = \displaystyle \sum_{k=0}^n k {n \choose k} p^k q^{n-k}

= \displaystyle \sum_{k=1}^n k  {n \choose k} p^k q^{n-k}

= \displaystyle \sum_{k=1}^n k  \frac{n!}{k!(n-k)!} p^k q^{n-k}

= \displaystyle \sum_{k=1}^n \frac{n!}{(k-1)!(n-k)!} p^k q^{n-k}

= \displaystyle \sum_{k=1}^n \frac{n (n-1)!}{(k-1)!(n-k)!} p^k q^{n-k}

= \displaystyle \sum_{i=0}^{n-1} \frac{n (n-1)!}{i!(n-1-i)!} p^{i+1} q^{n-1-i}

= \displaystyle np \sum_{i=0}^{n-1} \frac{(n-1)!}{i!(n-1-i)!} p^i q^{n-1-i}

= \displaystyle np(p+q)^{n-1}

= np \cdot 1^{n-1}

=np.

However, that’s a lot of work, and the way that I really want my students to do this, which is a lot easier (and which will be used throughout the semester), is by writing the binomial random variable as the sum of indicator random variables:

E(X) = E(I_1 + \dots + I_n) = E(I_1) + \dots + E(I_n) = p + \dots + p = np.

So, to reassure my students that they’re not going to be asked to reproduce the above lengthy calculation, I’ll tell them that I wrote all that down for my own machismo, just to prove to them that I really could do it.

Since my physical presence exudes next to no machismo, this almost always gets a laugh.
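For anyone who wants to double-check that both routes really do land on np, here is a minimal Python sketch; the choice of n = 10 and p = 0.3 below is arbitrary and purely for illustration.

from math import comb
# Mean of a binomial random variable computed directly from the definition E(X) = sum of k * P(X = k)
n, p = 10, 0.3
q = 1 - p
mean_direct = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))
print(mean_direct)   # approximately 3.0
print(n * p)         # np = 3.0, matching the closed form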

My Favorite One-Liners: Part 79

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

I’ll use today’s quip when there are multiple reasonable ways of solving a problem. For example,

Two fair dice are rolled. Find the probability that at least one of the rolls is a six.

This can be done by directly listing all of the possibilities:

11 \qquad 12 \qquad 13 \qquad 14 \qquad 15 \qquad 16

21 \qquad 22 \qquad 23 \qquad 24 \qquad 25 \qquad 26

31 \qquad 32 \qquad 33 \qquad 34 \qquad 35 \qquad 36

41 \qquad 42 \qquad 43 \qquad 44 \qquad 45 \qquad 46

51 \qquad 52 \qquad 53 \qquad 54 \qquad 55 \qquad 56

61 \qquad 62 \qquad 63 \qquad 64 \qquad 65 \qquad 66

Of these 36 possibilities, 11 have at least one six, so the answer is 11/36.
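For the skeptical (or the merely lazy), a brute-force enumeration confirms the count; this is just an illustrative sketch.

from fractions import Fraction
# Enumerate all 36 equally likely rolls of two fair dice and count those containing a six
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
favorable = [roll for roll in outcomes if 6 in roll]
print(len(favorable), "out of", len(outcomes))   # 11 out of 36
print(Fraction(len(favorable), len(outcomes)))   # 11/36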

Alternatively, we could use the addition rule:

P(\hbox{first a six or second a six}) = P(\hbox{first a six}) + P(\hbox{second a six}) - P(\hbox{first a six and second a six})

= P(\hbox{first a six}) + P(\hbox{second a six}) - P(\hbox{first a six}) P(\hbox{second a six})

= \displaystyle \frac{1}{6} + \frac{1}{6} - \frac{1}{6} \times \frac{1}{6}

= \displaystyle \frac{11}{36}.

Another possibility is using the complement:

P(\hbox{at least one six}) = 1 - P(\hbox{no sixes})

= 1 - P(\hbox{first is not a six})P(\hbox{second is not a six})

= 1 - \displaystyle \frac{5}{6} \times \frac{5}{6}

= \displaystyle \frac{11}{36}.
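The two formula-based approaches can also be checked with exact rational arithmetic; another quick sketch.

from fractions import Fraction
p_six = Fraction(1, 6)
# Addition rule for independent events: P(A or B) = P(A) + P(B) - P(A)P(B)
addition_rule = p_six + p_six - p_six * p_six
# Complement rule: 1 - P(no sixes)
complement = 1 - Fraction(5, 6) ** 2
print(addition_rule, complement)   # 11/36 11/36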

To emphasize that there are multiple ways of solving the problem, I’ll use this one-liner:

There are plenty of ways to skin a cat… for those of you who like skinning cats.

When I was a boy, I remember seeing some juvenile book of jokes titled “1001 Ways To Skin a Cat.” A recent search for this book on Amazon came up empty.

My Favorite One-Liners: Part 62

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

This is a story that I’ll tell after doing a couple of back-to-back central limit theorem problems. Here’s the first:

The chance of winning a column bet in roulette is 12/38. The bet pays 2 to 1, meaning that if you lose, you lose $1. However, if you win, you get your $1 back and $2 more. If this bet is made 1000 times, what is the probability of winning at least $0?

With my class, we solve this problem using standard techniques with the normal approximation:

\mu = E(X) = 2 \times \displaystyle \frac{12}{38} + (-1) \frac{26}{38} = - \displaystyle \frac{1}{19}

E(X^2) = 2^2 \times \displaystyle \frac{12}{38} + (-1)^2 \frac{26}{38} =  \displaystyle \frac{37}{19}

\sigma = SD(X) = \sqrt{ \displaystyle \frac{37}{19} - \left( - \displaystyle \frac{1}{19} \right)^2} = \displaystyle \frac{\sqrt{702}}{19}

E(T_0) = n\mu = 1000 \left( -\displaystyle \frac{1}{19} \right) \approx -52.63

\hbox{SD}(T_0) = \sigma \sqrt{n} = \displaystyle \frac{\sqrt{702}}{19} \sqrt{1000} \approx 44.10

P(T_0 > 0) \approx P\left(Z > \displaystyle \frac{0-(-52.63)}{44.10} \right) \approx P(Z > 1.193) \approx 0.1163.
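For readers who want to reproduce this arithmetic, here is a rough Python sketch of the normal approximation; the helper names normal_cdf and prob_ahead are just my own labels for this illustration.

from math import erf, sqrt
def normal_cdf(z):
    # Standard normal cumulative distribution function via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))
def prob_ahead(n, p_win=12/38, win=2, lose=-1):
    # Normal approximation to P(T_0 > 0), the chance of being ahead after n bets
    mu = win * p_win + lose * (1 - p_win)           # E(X) = -1/19
    ex2 = win**2 * p_win + lose**2 * (1 - p_win)    # E(X^2) = 37/19
    sigma = sqrt(ex2 - mu**2)                       # SD(X) = sqrt(702)/19
    z = (0 - n * mu) / (sigma * sqrt(n))            # standardize the total T_0
    return 1 - normal_cdf(z)
print(prob_ahead(1000))   # approximately 0.116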

Next, I’ll repeat the problem, except playing the game 10,000 times.

The chance of winning a column bet in roulette is 12/38. The bet pays 2 to 1, meaning that if you lose, you lose $1. However, if you win, you get your $1 back and $2 more. If this bet is made 10,000 times, what is the probability of winning at least $0?

The last three lines of the above calculation have to be changed:

E(T_0) = n\mu = 10,000 \left( -\displaystyle \frac{1}{19} \right) \approx -526.32

\hbox{SD}(T_0) = \sigma \sqrt{n} = \displaystyle \frac{\sqrt{702}}{19} \sqrt{10,000} \approx 139.45

P(T_0 > 0) \approx P\left(Z > \displaystyle \frac{0-(-526.32)}{139.45} \right) \approx P(Z > 3.774) \approx 0.00008.
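The sketch from above handles this version by changing only n:

print(prob_ahead(10000))   # approximately 0.00008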

In other words, the chance of winning drops dramatically. This is an example of the Law of Large Numbers: if you do something often enough, then what ought to happen eventually does happen.

As a corollary, if you’re going to bet at roulette, you should only bet a few times. And, I’ll tell my students, one Englishman took this to the (somewhat) logical extreme by going to Las Vegas and making the ultimate double-or-nothing bet, betting his entire life savings on one bet. After all, his odds of coming out ahead by making one bet were a whole lot higher than by making a sequence of bets.

Naturally, my students ask, “Did he win?”

My Favorite One-Liners: Part 61

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

This is a story that I like to tell my probability and statistics students when we cover the law of averages.

One of my favorite sports is golf, and one spring afternoon in my senior year I went out to play a round. I was assigned a tee time with two other students (that I didn’t know), and off we went.

Unfortunately, the group in front of us were, as I like to say, getting their money’s worth out of the round. Somebody would be stuck in a sand trap and then blast the ball into the sand trap on the other side of the green. Then he’d go to blast the ball out of that sand trap, and the ball would go back to the original one.

Golf etiquette dictates that slow-playing groups should let faster groups play through. However, this group never offered to let us pass them. And so, hole after hole, we would wait and wait and wait.

On hole #9, a player walking by himself came up from behind us. I’m not sure how that happened — perhaps the foursome that had been immediately behind us was even slower than the foursome in front of us — and he courteously asked if he could play through. I told him that we’d be happy to let him play through, but that the group in front of us hadn’t let us through, and so we were all stuck.

As a compromise, he asked if he could join our group. Naturally, we agreed.

This solo golfer did not introduce himself, but I recognized him because his picture had been in the student newspaper a few weeks earlier. He was Notah Begay III, then a hot-shot freshman on the Stanford men’s golf team. Though I didn’t know it then, he would later become a three-time All-American and, with Tiger Woods as a teammate, would win the NCAA championship. As a professional, he would win on the PGA Tour four times and was a member of the 2000 Presidents Cup team.

Of course, all that lay in the future. At the time, all I knew was that I was about to play with someone who was really, really good.

We ended up playing five holes together… numbers 10 through 14. After playing 14, it started to get dark and I decided to call it quits (as the 14th green was fairly close to the course’s entrance).

So Notah tees off on #10. BOOM! I had never been so close to anyone who hit a golf ball so far. The guys I was paired with started talking about which body parts they would willingly sever if only they could hit a tee shot like that.

And I thought to myself, Game on.

I quietly kept score of how I did versus how Notah did. And for five holes, I shot 1-over par, while he shot 2-over par. And for five holes, I beat a guy who would eventually earn over $5 million on the PGA Tour.

How did the 9-handicap amateur beat the future professional? Simple: we only played five holes.

Back then, if I shot 1-over par over a stretch of five holes, I would be pretty pleased with my play, but it wouldn’t be as if I had never done it before. And I’m sure Notah was annoyed that he was 2-over par for those five holes (he chili-dipped a couple of chip shots; I imagine that he was experimenting with a new chipping technique), but even the best golfers in the world will occasionally have a five-hole stretch where they go 2-over par or more.

Of course, a golf course doesn’t have just five holes; it has 18.

My all-time best score for a round of golf was a four-over par 76; I can count on one hand the number of times that I’ve broken 80. That would be a lousy score for a Division I golfer. So, to beat Notah for a complete round of golf, it would take one of my absolute best days happening simultaneously with one of his worst.

Furthermore, a stroke-play golf tournament is not typically decided in only one round of golf. A typical professional golf tournament, for those who make the cut, lasts four rounds. So, to beat Notah at a real golf tournament, I would have to have my absolute best day four days in a row at the same time that Notah had four of his worst days.

That’s simply not going to happen.

So I share this anecdote with my students to illustrate the law of averages. (I also use a spreadsheet simulating flipping a coin thousands of times to make the same point.) If you do something enough times, what ought to happen does happen. However, if instead you do something only a few times, then unexpected results can happen. A 9-handicap golfer can beat a much better player if they only play 5 holes.
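The coin-flipping demonstration can also be sketched in a few lines of Python; this is only a rough stand-in for the spreadsheet, not the spreadsheet itself.

import random
random.seed(2017)   # fixed seed so the demo is reproducible
# Proportion of heads in n fair coin flips: erratic for small n, close to 0.5 for large n
for n in (10, 100, 1000, 10000, 100000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} flips: {heads / n:.3f} heads")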

To give a more conventional illustration, a gambler can make a few dozen bets at a casino and still come out ahead. However, if the gambler stays at the casino long enough, he is all but guaranteed to lose money.

My Favorite One-Liners: Part 43

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Years ago, my first class of students decided to call me “Dr. Q” instead of “Dr. Quintanilla,” and the name has stuck ever since. And I’ll occasionally use this to my advantage when choosing names of variables. For example, here’s a typical proof by induction involving divisibility.

Theorem: If n \ge 1 is a positive integer, then 5^n - 1 is a multiple of 4.

Proof. By induction on n.

n = 1: 5^1 - 1 = 4, which is clearly a multiple of 4.

n: Assume that 5^n - 1 is a multiple of 4.

At this point in the calculation, I ask how I can write this statement as an equation. Eventually, somebody will volunteer that if 5^n-1 is a multiple of 4, then 5^n-1 is equal to 4 times something. At which point, I’ll volunteer:

Yes, so let’s name that something with a variable. Naturally, we should choose something important, something regal, something majestic… so let’s choose the letter q. (Groans and laughter.) It’s good to be the king.

So the proof continues:

n: Assume that 5^n - 1 = 4q, where q is an integer.

n+1: We wish to show that 5^{n+1} - 1 is also a multiple of 4.

At this point, I’ll ask my class how we should write this. Naturally, I give them no choice in the matter:

We wish to show that 5^{n+1} - 1 = 4Q, where Q is some (possibly different) integer.

Then we continue the proof:

5^{n+1} - 1 = 5^n 5^1 - 1

= 5 \times 5^n - 1

= 5 \times (4q + 1) - 1 by the induction hypothesis

= 20q + 5 - 1

= 20q + 4

= 4(5q + 1).

So if we let Q = 5q + 1, then 5^{n+1} - 1 = 4Q, where Q is an integer because q is also an integer.

QED
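A quick numerical check, which of course is no substitute for the induction proof, confirms the pattern for the first several values of n:

# Verify that 5**n - 1 is a multiple of 4, printing the quotient q from 5**n - 1 = 4q
for n in range(1, 11):
    value = 5**n - 1
    assert value % 4 == 0
    print(n, value, value // 4)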


On the flip side of braggadocio, the formula for the binomial distribution is

P(X = k) = \displaystyle {n \choose k} p^k q^{n-k},

where X is the number of successes in n independent and identically distributed trials, where p represents the probability of success on any one trial, and (to my shame) q is the probability of failure.

My Favorite One-Liners: Part 40

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

In some classes, the Greek letter \phi or \Phi naturally appears. Sometimes, it’s an angle in a triangle or a displacement when graphing a sinusoidal function. Other times, it represents the cumulative distribution function of a standard normal distribution.

This raises the question: how should a student pronounce this symbol?

I tell my students that this is the Greek letter “phi,” pronounced “fee”. However, other mathematicians may pronounce it as “fie,” rhyming with “high”. Continuing,

Other mathematicians pronounce it as “foe.” Others, as “fum.”

My Favorite One-Liners: Part 33

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Perhaps one of the more difficult things that I try to instill in my students is numeracy, or a sense of whether the answer to a calculation is plausible. As an initial step toward this goal, I’ll try to teach my students some basic pointers about whether an answer is even possible.

For example, when calculating a standard deviation, students have to compute E(X) and E(X^2):

E(X) = \sum x p(x) \qquad \hbox{or} \qquad E(X) = \int_a^b x f(x) \, dx

E(X^2) = \sum x^2 p(x) \qquad \hbox{or} \qquad E(X^2) = \int_a^b x^2 f(x) \, dx

After these are computed — which could take some time — the variance is then calculated:

\hbox{Var}(X) = E(X^2) - [E(X)]^2.

Finally, the standard deviation is found by taking the square root of the variance.

So, I’ll ask my students, what do you do if you calculate the variance and it’s negative, so that it’s impossible to take the square root? After a minute of students hemming and hawing, I’ll tell them emphatically what they should do:

It’s wrong… do it again.

The same principle applies when computing probabilities, which always have to be between 0 and 1. So, if ever a student computes a probability that’s either negative or else greater than 1, they can be assured that the answer is wrong and that there’s a mistake someplace in their computation that needs to be found.
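Both sanity checks (probabilities between 0 and 1, a non-negative variance) are easy to automate; here is a small sketch with a made-up distribution chosen only to exercise the formulas.

from math import sqrt
pmf = {0: 0.2, 1: 0.5, 2: 0.3}                  # made-up distribution: value -> probability
assert all(0 <= p <= 1 for p in pmf.values())   # every probability must be between 0 and 1
assert abs(sum(pmf.values()) - 1) < 1e-12       # and they must sum to 1
ex = sum(x * p for x, p in pmf.items())         # E(X)
ex2 = sum(x * x * p for x, p in pmf.items())    # E(X^2)
variance = ex2 - ex**2
assert variance >= 0                            # a negative variance means a mistake upstream
print(sqrt(variance))                           # the standard deviation, here about 0.7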

My Favorite One-Liners: Part 31

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s the closing example that I’ll use when presenting the binomial and hypergeometric distributions to my probability/statistics students.

A lonely bachelor decides to play the field, figuring that a lifetime of watching “Leave It To Beaver” reruns doesn’t sound all that pleasant. On 250 consecutive days, he calls a different woman for a date. Unfortunately, through the school of hard knocks, he knows that the probability that a given woman will accept his gracious invitation is only 1%. What is the chance that he will land at least three dates?

You can probably imagine the stretch I was enduring when I first developed this example many years ago. Nevertheless, I make a point to add the following disclaimer before we start finding the solution, which always gets a laugh:

The events of this exercise are purely fictitious. Any resemblance to any actual persons — living, or dead, or currently speaking — is purely coincidental.
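In case anyone wants the numerical answer, here is a short binomial computation, a sketch using the n = 250 and p = 0.01 from the problem:

from math import comb
n, p = 250, 0.01
# P(at least 3 acceptances) = 1 - P(0) - P(1) - P(2)
p_fewer_than_three = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(1 - p_fewer_than_three)   # roughly 0.46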

My Favorite One-Liners: Part 30

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is a follow-up to yesterday’s post and is one that I’ll use when I need my students to remember something that I taught them earlier in the semester — perhaps even the previous day.

For example, in my applied statistics class, one day I’ll show students how to compute the expected value and the standard deviation of a random variable:

E(X) = \sum x \cdot P(X=x)

E(X^2) = \sum x^2 \cdot P(X=x)

\hbox{SD}(X) = \sqrt{ E(X^2) - [E(X)]^2 }

Then, the next time I meet them, I start working on a seemingly new topic, the derivation of the binomial distribution:

P(X = k) = \displaystyle {n \choose k} p^k q^{n-k}.

This derivation takes some time because I want my students to understand not only how to use the formula but also where the formula comes from. Eventually, I’ll work out that if n = 3 and p = 0.2,

P(X = 0) = 0.512

P(X = 1) = 0.384

P(X = 2) = 0.096

P(X = 3) = 0.008

Then, I announce to my class, I next want to compute E(X) and \hbox{SD}(X). We had just done this the previous class period; however, I know full well that they haven’t yet committed those formulas to memory. So here’s the one-liner that I use: “If you had a good professor, you’d remember how to do this.”

Eventually, when the awkward silence has lasted long enough because no one can remember the formula (without looking back at the previous day’s notes), I plunge an imaginary knife into my heart and turn the imaginary dagger, getting the point across: You really need to remember this stuff.
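For the record, the computation they are being asked to remember works out as follows; a quick sketch using the probabilities computed above.

from math import sqrt
pmf = {0: 0.512, 1: 0.384, 2: 0.096, 3: 0.008}   # binomial probabilities for n = 3, p = 0.2
ex = sum(x * p for x, p in pmf.items())          # 0.6, matching np = 3(0.2)
ex2 = sum(x * x * p for x, p in pmf.items())     # 0.84
sd = sqrt(ex2 - ex**2)                           # about 0.693, matching sqrt(npq) = sqrt(0.48)
print(ex, sd)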

My Favorite One-Liners: Part 29

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when I need my students to remember something from a previous course — especially when it’s a difficult concept from a previous course — that somebody else taught them in a previous semester.

For example, in my probability class, I’ll introduce the Poisson distribution

P(X = k) = e^{-\mu} \displaystyle \frac{\mu^k}{k!},

where \mu > 0 and the permissible values of k are non-negative integers.

In particular, since these are probabilities and one and only one of these values can be taken, this means that

\displaystyle \sum_{k=0}^\infty e^{-\mu} \frac{\mu^k}{k!} = 1.

At this point, I want students to remember that they’ve actually seen this before, so I replace \mu by x and then multiply both sides by e^x:

\displaystyle \sum_{k=0}^\infty \frac{x^k}{k!} = e^x.

Of course, this is the Taylor series expansion for e^x. However, my experience is that most students have decidedly mixed feelings about Taylor series; often, it’s the last thing that they learn in Calculus II, which means it’s the first thing that they forget when the semester is over. Also, most students have a really hard time with Taylor series when they first learn about them.
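For anyone who would rather see the identity numerically than take it on faith, a few terms of the series already agree with e^x to many decimal places; here is a sketch with x = 2.5 chosen arbitrarily.

from math import exp, factorial
x = 2.5   # playing the role of mu
# Partial sum of the Taylor series for e^x versus the exponential itself
print(sum(x**k / factorial(k) for k in range(20)), exp(x))    # both approximately 12.1825
# Equivalently, the Poisson probabilities e^{-x} x^k / k! sum to essentially 1
print(sum(exp(-x) * x**k / factorial(k) for k in range(20)))  # approximately 1.0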

So here’s my one-liner that I’ll say at this point: “Does this bring back any bad memories for anyone? Perhaps like an old Spice Girls song?” And this never fails to get an understanding laugh before I remind them about Taylor series.