My Favorite One-Liners: Part 31

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s the closing example that I’ll use when presenting the binomial and hypergeometric distributions to my probability/statistics students.

A lonely bachelor decides to play the field, having concluded that a lifetime of watching “Leave It To Beaver” reruns doesn’t sound all that pleasant. On 250 consecutive days, he calls a different woman for a date. Unfortunately, through the school of hard knocks, he knows that the probability that a given woman will accept his gracious invitation is only 1%. What is the chance that he will land at least three dates?

You can probably imagine the stretch I was enduring when I first developed this example many years ago. Nevertheless, I make a point to add the following disclaimer before we start finding the solution, which always gets a laugh:

The events of this exercise are purely fictitious. Any resemblance to any actual persons — living, or dead, or currently speaking — is purely coincidental.
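For the record, the numbers work out as follows. This is a quick sketch in Python (not part of the lecture), modeling the 250 calls as a Binomial(250, 0.01) random variable:

```python
from math import comb

n, p = 250, 0.01

def binom_pmf(k):
    """P(X = k) for a Binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# P(at least 3 dates) = 1 - P(0) - P(1) - P(2)
p_at_least_3 = 1 - sum(binom_pmf(k) for k in range(3))
print(round(p_at_least_3, 4))  # roughly 0.457
```

So even after 250 phone calls, the poor fellow's chance of landing at least three dates is a bit worse than a coin flip.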

My Favorite One-Liners: Part 30

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is a follow-up to yesterday’s post and is one that I’ll use when I need my students to remember something that I taught them earlier in the semester — perhaps even the previous day.

For example, in my applied statistics class, one day I’ll show students how to compute the expected value and the standard deviation of a random variable:

E(X) = \sum x \cdot P(X=x)

E(X^2) = \sum x^2 \cdot P(X=x)

\hbox{SD}(X) = \sqrt{ E(X^2) - [E(X)]^2 }

Then, the next time I meet them, I start working on a seemingly new topic, the derivation of the binomial distribution:

P(X = k) = \displaystyle {n \choose k} p^k q^{n-k}.

This derivation takes some time because I want my students to understand not only how to use the formula but also where the formula comes from. Eventually, I’ll work out that if n = 3 and p = 0.2,

P(X = 0) = 0.512

P(X = 1) = 0.384

P(X = 2) = 0.096

P(X = 3) = 0.008

Then I announce to my class that I next want to compute E(X) and \hbox{SD}(X). We had just done this the previous class period; however, I know full well that they haven’t yet committed those formulas to memory. So here’s the one-liner that I use: “If you had a good professor, you’d remember how to do this.”

Eventually, when the awkward silence has lasted long enough because no one can remember the formula (without looking back at the previous day’s notes), I plunge an imaginary knife into my heart and turn the imaginary dagger, getting the point across: You really need to remember this stuff.
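For the record, the arithmetic from the table above works out like this (a quick sketch in Python):

```python
from math import sqrt

# pmf of X for n = 3, p = 0.2, taken from the table above
pmf = {0: 0.512, 1: 0.384, 2: 0.096, 3: 0.008}

ex = sum(x * p for x, p in pmf.items())       # E(X)
ex2 = sum(x**2 * p for x, p in pmf.items())   # E(X^2)
sd = sqrt(ex2 - ex**2)                        # SD(X)

print(round(ex, 4))  # 0.6, matching the shortcut np = 3(0.2)
print(round(sd, 4))  # 0.6928, matching sqrt(npq) = sqrt(0.48)
```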

My Favorite One-Liners: Part 29

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when I need my students to remember something from a previous course — especially when it’s a difficult concept from a previous course — that somebody else taught them in a previous semester.

For example, in my probability class, I’ll introduce the Poisson distribution

P(X = k) = e^{-\mu} \displaystyle \frac{\mu^k}{k!},

where \mu > 0 and the permissible values of k are non-negative integers.

In particular, since these are probabilities and one and only one of these values can be taken, this means that

\displaystyle \sum_{k=0}^\infty e^{-\mu} \frac{\mu^k}{k!} = 1.

At this point, I want students to remember that they’ve actually seen this before, so I replace \mu by x and then multiply both sides by e^x:

\displaystyle \sum_{k=0}^\infty \frac{x^k}{k!} = e^x.

Of course, this is the Taylor series expansion for e^x. However, my experience is that most students have decidedly mixed feelings about Taylor series; often, it’s the last thing that they learn in Calculus II, which means it’s the first thing that they forget when the semester is over. Also, most students have a really hard time with Taylor series when they first learn about them.

So here’s my one-liner that I’ll say at this point: “Does this bring back any bad memories for anyone? Perhaps like an old Spice Girls song?” And this never fails to get an understanding laugh before I remind them about Taylor series.
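As a quick numerical sanity check (a sketch in Python, truncating the infinite sum, with μ = 4 an arbitrary choice), the Poisson probabilities really do sum to 1:

```python
from math import exp, factorial

mu = 4.0  # any mu > 0 works; 4 is an arbitrary choice

# Truncate the infinite sum; the tail is negligible long before k = 100
total = sum(exp(-mu) * mu**k / factorial(k) for k in range(100))
print(abs(total - 1) < 1e-12)  # True
```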

 

My Favorite One-Liners: Part 28

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when simple techniques get used in a complicated way.

Consider the solution of the linear recurrence relation

Q_n = Q_{n-1} + 2 Q_{n-2},

where Q_0 = 1 and Q_1 = 1. With no modesty, I call this one the Quintanilla sequence when I teach my students — the forgotten little brother of the Fibonacci sequence.

To find the solution of this linear recurrence relation, the standard technique — a pretty long procedure — begins with the characteristic equation. Rewriting the recurrence as Q_n - Q_{n-1} - 2 Q_{n-2} = 0, we obtain the characteristic equation

r^2 - r - 2 = 0

This can be solved by any standard technique at a student’s disposal. If necessary, the quadratic formula can be used. However, for this one, the left-hand side simply factors:

(r-2)(r+1) = 0

r=2 \qquad \hbox{or} \qquad r = -1

(Indeed, I “developed” the Quintanilla equation on purpose, for pedagogical reasons, because its characteristic equation has two fairly simple roots — unlike the characteristic equation for the Fibonacci sequence.)

From these two roots, we can write down the general solution for the linear recurrence relation:

Q_n = \alpha_1 \times 2^n + \alpha_2 \times (-1)^n,

where \alpha_1 and \alpha_2 are constants to be determined. To find these constants, we plug in n = 0:

Q_0 = \alpha_1 \times 2^0 + \alpha_2 \times (-1)^0.

We then plug in n = 1:

Q_1 = \alpha_1 \times 2^1 + \alpha_2 \times (-1)^1.

Using the initial conditions gives

1 = \alpha_1 + \alpha_2

1 = 2 \alpha_1 - \alpha_2

This is a system of two equations in two unknowns, which can then be solved using any standard technique at the student’s disposal. Students should quickly find that \alpha_1 = 2/3 and \alpha_2 = 1/3, so that

Q_n = \displaystyle \frac{2}{3} \times 2^n + \frac{1}{3} \times (-1)^n = \frac{2^{n+1} + (-1)^n}{3},

which is the final answer.
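As a sanity check (a quick sketch in Python, not part of the lecture), the closed form agrees with the recurrence itself:

```python
def q_recursive(n):
    """Compute Q_n directly from the recurrence Q_n = Q_{n-1} + 2 Q_{n-2}."""
    q = [1, 1]  # initial conditions Q_0 = 1, Q_1 = 1
    for _ in range(2, n + 1):
        q.append(q[-1] + 2 * q[-2])
    return q[n]

def q_closed_form(n):
    """The closed-form solution derived above."""
    return (2**(n + 1) + (-1)**n) // 3  # always an integer

# The two agree for every n we check
assert all(q_recursive(n) == q_closed_form(n) for n in range(20))
print([q_closed_form(n) for n in range(7)])  # [1, 1, 3, 5, 11, 21, 43]
```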

Although this is a long procedure, the key steps are actually first taught in Algebra I: solving a quadratic equation and solving a system of two linear equations in two unknowns. So here’s my one-liner to describe this procedure:

This is just an algebra problem on steroids.

Yes, it’s only high school algebra, but used in a creative way that isn’t ordinarily taught when students first learn algebra.

I’ll use this “on steroids” line in any class when a simple technique is used in an unusual — and usually laborious — way to solve a new problem at the post-secondary level.

 

 

My Favorite One-Liners: Part 27

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s an anecdote that I’ll share when teaching students about factorials:

1! = 1

2! = 1 \times 2 = 2

3! = 1 \times 2 \times 3 = 6

4! = 1 \times 2 \times 3 \times 4 = 24

5! = 1 \times 2 \times 3 \times 4 \times 5 = 120

The obvious observation is that the factorials get big very, very quickly.

Here’s my anecdote:

Many years ago, I was writing lesson plans while the TV show “Wheel of Fortune” was on in the background. The contestant solved the puzzle at the end, and Pat Sajak declared, “You have just won $40,320 in cash and prizes.”

So I immediately thought to myself, “Ah, 8 factorial.”

Then I thought, ugh [while slapping myself in the forehead, grimacing, and shaking my head, pretending that I can’t believe that that was the first thought that immediately came to mind].

[Finishing the story:] Not surprisingly, I was still single when this happened.
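And, for the record, the arithmetic checks out — factorials reach game-show prize money remarkably fast (a quick check in Python):

```python
from math import factorial

# Factorials grow very, very quickly
for n in range(1, 9):
    print(n, factorial(n))

# Sure enough: 8! = 40320
assert factorial(8) == 40320
```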

My Favorite One-Liners: Part 26

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s a problem that could appear early in a probability class:

Let P(A) = 0.2, P(B) = 0.4, and P(A \cup B) = 0.5. Find P(A \mid B).

The standard technique for solving this problem involves first finding P(A \cap B) using the Addition Rule:

P(A \cup B) = P(A) + P(B) - P(A \cap B)

0.5 = 0.2 + 0.4 - P(A \cap B)

P(A \cap B) = 0.1

From here, the Multiplication Rule can be used (or, equivalently, the definition of a conditional probability):

P(B \cap A) = P(B) \cdot P(A \mid B)

0.1 = 0.4 P(A \mid B)

0.25 = P(A \mid B)

So far, so good.

Now let me add a small twist to the original problem that creates a small difficulty when solving:

Let P(A) = 0.2, P(B) = 0.4, and P(A \cup B) = 0.5. Find P(A \cap B \mid A \cup B).

Proceeding as before, we obtain

P( [A \cap B] \cap [A \cup B] ) = P(A \cup B) \cdot P(A \cap B \mid A \cup B)

The value of P(A \cup B) is obvious. But how do we evaluate the left-hand side?

If I’m teaching an advanced probability class, I might expect students to simplify the left-hand side formally with the laws of set algebra. However, it’s a whole lot easier to reason it out: I’m looking for the probability that both A and B happen or else at least one of A and B happen. Well, that’s clearly redundant: if both A and B happen, then certainly at least one of A and B happen.

Here’s my one-liner, which I say, if possible, using only one breath of air:

Clearly, this is redundant. It’s like saying Dr. Q is my professor and he’s a total stud. It’s redundant. It’s obvious. There’s no need to actually say it.

After the laughter settles from this bit of braggadocio, the A \cup B can be safely dropped from the left side:

P( A \cap B) = P(A \cup B) \cdot P(A \cap B \mid A \cup B)

0.1 = 0.5 \cdot P(A \cap B \mid A \cup B)

0.2 = P(A \cap B \mid A \cup B)

However, I need to emphasize that dropping the term on the left side is a special feature of this particular problem since one set was a subset of the other, and that students shouldn’t expect to always be able to do this when computing conditional probabilities.
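To see both answers concretely, here’s a sketch in Python built on a hypothetical sample space consistent with the given probabilities (the particular split into “A only,” “B only,” and so on is my own choice; any split with these totals works):

```python
# A hypothetical sample space consistent with the given numbers:
# P(A and B) = 0.1, P(A only) = 0.1, P(B only) = 0.3, P(neither) = 0.5
p_both, p_a_only, p_b_only = 0.1, 0.1, 0.3

p_b = p_both + p_b_only                  # P(B) = 0.4
p_union = p_both + p_a_only + p_b_only   # P(A or B) = 0.5

# First problem: P(A | B)
print(p_both / p_b)      # 0.25

# Second problem: P(A and B | A or B); since (A and B) is a subset of
# (A or B), the intersection on the left collapses to just (A and B)
print(p_both / p_union)  # 0.2
```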

My Favorite One-Liners: Part 25

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Consider the integral

\displaystyle \int_0^2 2x(1-x^2)^3 \, dx

The standard technique — other than multiplying everything out — is the substitution u = 1-x^2, so that du = -2x \, dx. Also, x = 0 corresponds to u = 1, while x = 2 corresponds to u = -3. Therefore,

\displaystyle\int_0^2 2x(1-x^2)^3 \, dx = - \displaystyle\int_0^2 (-2x)(1-x^2)^3 \, dx = -\displaystyle\int_1^{-3} u^3 \, du.

My one-liner at this point is telling my students, “At this point, about 10,000 volts of electricity should be going down your spine.” I’ll use this line when a very unexpected result happens — like a “left” endpoint that’s greater than the “right” endpoint. Naturally, for this problem, the next step — though not logically necessary, it’s psychologically reassuring — is to absorb the negative sign by flipping the endpoints:

\displaystyle\int_0^2 2x(1-x^2)^3 \, dx =  -\displaystyle\int_1^{-3} u^3 \, du = \displaystyle\int_{-3}^1 u^3 \, du,

and then the calculation can continue.
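Finishing the calculation, \int_{-3}^1 u^3 \, du = \frac{1 - 81}{4} = -20, and a quick midpoint-rule check in Python (a numerical sketch, not part of the lecture) agrees:

```python
# Midpoint-rule approximation of the original integral over [0, 2];
# the u-substitution gives the exact value (1 - 81)/4 = -20
n = 100_000
dx = 2 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx  # midpoint of the i-th subinterval
    total += 2 * x * (1 - x * x) ** 3 * dx
print(round(total, 3))  # -20.0
```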

My Favorite One-Liners: Part 24

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s a problem that could appear in my class in probability or statistics:

Let f(x) = 3x^2 be a probability density function for 0 \le x \le 1. Find F(x) = P(X \le x), the cumulative distribution function of X.

A student’s first reaction might be to set up the integral as

\displaystyle \int_0^x 3x^2 \, dx

The problem with this set-up, of course, is that the letter x has already been reserved as the right endpoint for this definite integral. Therefore, inside the integral, we should choose any other letter — just not x — as the dummy variable.

Which sets up my one-liner: “In the words of the great philosopher Jean-Luc Picard: Plenty of letters left in the alphabet.”

We then write the integral as something like

\displaystyle \int_0^x 3t^2 \, dt

and then get on with the business of finding F(x).
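The rest of the business is routine: the exact answer is F(x) = x^3 on [0, 1], which a quick numerical sketch in Python confirms:

```python
def F(x, n=10_000):
    """Midpoint-rule approximation of the integral of 3t^2 from t = 0 to t = x."""
    dt = x / n
    return sum(3 * ((i + 0.5) * dt) ** 2 * dt for i in range(n))

# The exact antiderivative gives F(x) = x^3 on [0, 1]
print(round(F(0.5), 6))  # 0.125
```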

My Favorite One-Liners: Part 23

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here are some sage words of wisdom that I give in my statistics class:

If the alternative hypothesis has the form p > p_0, then the rejection region lies to the right of p_0. On the other hand, if the alternative hypothesis has the form p < p_0, then the rejection region lies to the left of p_0.

On the other hand, if the alternative hypothesis has the form p \ne p_0, then the rejection region has two parts: one part to the left of p_0, and another part to the right. So it’s kind of like my single days. Back then, my rejection region had two parts: Friday night and Saturday night.

My Favorite One-Liners: Part 22

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s example might be the most cringe-worthy pun that I use in any class that I teach.

In my statistics classes, I try to emphasize to students that a high value of the correlation coefficient r is not the same thing as causation. To hopefully drive home this point, I’ll use the following picture.

[Figure: the famous chart showing global average temperature rising as the number of pirates worldwide declines]

Conclusion: If we want to stop global warming, we should all become pirates.

Obviously, I tell my class, there isn’t a cause-and-effect relationship here, even though there is a strong correlation. So, I tell my class, in my best pirate voice, “Correlation is not the same thing as causation, even if you get a large value of ARRRRRRR.”

Without fail, my students love this awful wisecrack.
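Indeed, with some made-up (entirely hypothetical) numbers in the spirit of that chart — pirate counts falling over the years while temperatures rise — the correlation coefficient comes out dramatic, even though no one believes the pirates are doing it (a sketch in Python):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: pirate counts falling while temperatures rise
pirates = [35000, 45000, 20000, 15000, 5000, 400, 17]
temps = [14.25, 14.30, 14.35, 14.40, 14.55, 14.60, 14.80]

r = pearson_r(pirates, temps)
print(r < -0.8)  # True: a strong (negative) correlation, yet no causation
```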

For further reading, see my series on correlation and causation.