Happy Fourth of July.
I’m a sucker for G-rated ways of using humor to engage students with concepts in the mathematical curriculum. I never thought that Saturday Night Live would provide a wonderful source of material for this effort.
I had forgotten the precise assumptions on uniform convergence that guarantee that an infinite series can be differentiated term by term, so that one can safely conclude

$\displaystyle\frac{d}{dx}\sum_{n=1}^{\infty} f_n(x) = \sum_{n=1}^{\infty} \frac{d}{dx} f_n(x)$.
This was part of my studies in real analysis as a student, so I remembered there was a theorem but I had forgotten the details.
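(For the record, the result I was trying to recall is the standard term-by-term differentiation theorem, which I state here from memory: if each $f_n$ is differentiable on $[a,b]$, if $\displaystyle\sum_{n=1}^{\infty} f_n(x_0)$ converges for some $x_0 \in [a,b]$, and if $\displaystyle\sum_{n=1}^{\infty} f_n'$ converges uniformly on $[a,b]$, then $\displaystyle\sum_{n=1}^{\infty} f_n$ converges uniformly on $[a,b]$ to a differentiable function and

$\displaystyle\frac{d}{dx}\sum_{n=1}^{\infty} f_n(x) = \sum_{n=1}^{\infty} f_n'(x)$

on $[a,b]$.)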
So, like just about everyone else on the planet, I went to Google to refresh my memory even though I knew that searching for mathematical results on Google can be iffy at best.
And I was not disappointed. Behold this laughably horrible false analogy (and even worse graphic) that I found on chegg.com:
Suppose Arti has to plan a birthday party and has lots of work to do like arranging stuff for decorations, planning venue for the party, arranging catering for the party, etc. All these tasks can not be done in one go and so need to be planned. Once the order of the tasks is decided, they are executed step by step so that all the arrangements are made in time and the party is a success.
Similarly, in Mathematics when a long expression needs to be differentiated or integrated, the calculation becomes cumbersome if the expression is considered as a whole but if it is broken down into small expressions, both differentiation and the integration become easy.
Pedagogically, I’m all for using whatever technique an instructor might deem necessary to “sell” abstract mathematical concepts to students. Nevertheless, I’m pretty sure that this particular party-planning analogy has no potency for students who have progressed far enough to rigorously study infinite series.
Let $T$ be the set of all times, and let $G(t)$ measure how good day $t$ is. Translate the logical statement

$\forall t \in T\,(t > t_0 \Rightarrow G(t) < G(t_0))$,

where time $t_0$ is today.
This matches the chorus of “Best Days of Your Life” by Kellie Pickler, co-written by and featuring Taylor Swift.
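If a concrete check helps, here is a toy finite model in Python (the timeline, the goodness function, and the choice of "today" are all invented for illustration):

# hypothetical finite model of the statement above
T = range(10)                          # the set of all times
today = 4
G = {t: 10 - abs(t - 2) for t in T}    # invented "goodness" of each day

# "every day after today is worse than today"
print(all(G[t] < G[today] for t in T if t > today))   # True for this model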
Context: Part of a discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.

In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.

When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
Let $T$ be the set of all times, and let $P(t)$ be the statement “I got by on my own at time $t$.” Translate the logical statement

$\forall t \in T\,(t < t_0 \Rightarrow P(t))$,

where time $t_0$ is today.
This matches the opening line of the fabulous power ballad “Alone” by Heart.
And while I’ve got this song in mind, here’s the breakout performance by a young unknown Carrie Underwood on American Idol.
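Once again, a toy finite model in Python makes the quantifier concrete (the timeline and truth values are invented for illustration):

# hypothetical finite model: P(t) = "I got by on my own at time t"
T = range(8)
today = 5
P = {t: t < today for t in T}    # true at every time before today

# "till now I always got by on my own"
print(all(P[t] for t in T if t < today))   # True for this model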
Context: Part of a discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.

In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.

When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.

The following problem appeared in Volume 131, Issue 9 (2024) of The American Mathematical Monthly.
Let $X$ and $Y$ be independent normally distributed random variables, each with its own mean and variance. Show that the variance of $X$ conditioned on the event $X > Y$ is smaller than the variance of $X$ alone.
In previous posts, we reduced the problem to showing that if $x$ is any real number, then

$f(x) = x\,\Phi(x) + \varphi(x)$

is always positive, where $\Phi$ is the cumulative distribution function of the standard normal distribution and $\varphi = \Phi'$ is its probability density function. If we can prove this, then the original problem will be true.
Motivated by the graph of $f(x)$, I thought of a two-step method for showing $f(x)$ must be positive: show that $f(x)$ is an increasing function, and show that $\displaystyle\lim_{x \to -\infty} f(x) = 0$. If I could prove both of these claims, then that would prove that $f(x)$ must always be positive.
I was able to show the second step by demonstrating that, if $x < 0$,

$x\,\Phi(x) + \varphi(x) = -x\displaystyle\int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt$.
As discussed in the last post, the limit follows from this equality. However, I just couldn’t figure out the first step.
So I kept trying.
And trying.
And trying.
Until it finally hit me: I’m working too hard! The goal is to show that $f(x)$ is positive. Clearly, clearly, the right-hand side of the last equation is positive when $x < 0$! So that’s the entire proof for $x < 0$… there was no need to prove that $f(x)$ is increasing!
For $x \ge 0$, it’s even easier. If $x$ is non-negative, then

$x\,\Phi(x) + \varphi(x) \ge \varphi(x) > 0$.
So, in either case, $f(x)$ must be positive. Following the logical thread in the previous posts, this demonstrates that

$\text{Var}(X \mid X > Y) = \sigma_X^2 - \dfrac{\sigma_X^4}{\sigma^2} \cdot \dfrac{\varphi(b)\left[b\,\Phi(b) + \varphi(b)\right]}{\left[\Phi(b)\right]^2} < \sigma_X^2$,

so that

$\text{Var}(X \mid X > Y) < \text{Var}(X)$,

thus concluding the solution.
And I was really annoyed at myself that I stumbled over the last step for so long, when the solution was literally right in front of me.
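For anyone who wants to kick the tires numerically, here is a quick sanity check of both cases in Python with SciPy (my own spot-check; the original exploration in these posts used Mathematica):

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# f(x) = x*Phi(x) + phi(x), the function shown above to be positive
def f(x):
    return x * norm.cdf(x) + norm.pdf(x)

# positivity at a few sample points, for both signs of x
for x in [-8, -3, -0.5, 0, 1, 4]:
    assert f(x) > 0

# for x < 0, f(x) should equal -x times the integral of phi(t)/t^2 from -infinity to x
for x in [-6, -2, -0.5]:
    integral, _ = quad(lambda t: norm.pdf(t) / t**2, -np.inf, x)
    print(x, f(x), -x * integral)   # the last two columns agree

The agreement of the last two printed columns is exactly the identity displayed above.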
The following problem appeared in Volume 131, Issue 9 (2024) of The American Mathematical Monthly.
Let $X$ and $Y$ be independent normally distributed random variables, each with its own mean and variance. Show that the variance of $X$ conditioned on the event $X > Y$ is smaller than the variance of $X$ alone.
In previous posts, we reduced the problem to showing that if $x$ is any real number, then

$f(x) = x\,\Phi(x) + \varphi(x)$

is always positive, where $\Phi$ is the cumulative distribution function of the standard normal distribution and $\varphi = \Phi'$ is its probability density function. If we can prove this, then the original problem will be true.
When I was solving this problem for the first time, my progress through the first few steps was hindered by algebra mistakes and the like, but I didn’t doubt that I was progressing toward the answer. At this point in the solution, however, I was genuinely stuck: nothing immediately popped to mind for showing that $x\,\Phi(x) + \varphi(x)$ must be greater than $0$.

So I turned to Mathematica, just to make sure I was on the right track. Based on the graph, the function $f(x) = x\,\Phi(x) + \varphi(x)$ certainly looks positive.
What’s more, the graph suggests attempting to prove a couple of things: $f(x)$ is an increasing function, and

$\displaystyle\lim_{x \to -\infty} \left[x\,\Phi(x) + \varphi(x)\right] = 0$

or, equivalently (since $\varphi(x) \to 0$),

$\displaystyle\lim_{x \to -\infty} x\,\Phi(x) = 0$.

If I could prove both of these claims, then that would prove that $f(x)$ must always be positive.
I started by trying to show

$\displaystyle\lim_{x \to -\infty} x\,\Phi(x) = \lim_{x \to -\infty} x\int_{-\infty}^{x} \frac{e^{-t^2/2}}{\sqrt{2\pi}}\,dt = 0$.

I vaguely remembered something about the asymptotic expansion of the above integral from a course decades ago, and so I consulted that course’s textbook, by Bender and Orszag, to refresh my memory. To derive the behavior of $\Phi(x)$ as $x \to -\infty$, we integrate by parts, using $\varphi'(t) = -t\,\varphi(t)$. (This is permissible: the integrands below are well-behaved if $x < 0$, so that $t = 0$ is not in the range of integration.)

$\Phi(x) = \displaystyle\int_{-\infty}^{x} \varphi(t)\,dt = \int_{-\infty}^{x} \left(-\frac{1}{t}\right)\varphi'(t)\,dt = \left[-\frac{\varphi(t)}{t}\right]_{-\infty}^{x} - \int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt = -\frac{\varphi(x)}{x} - \int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt$.
This is agonizingly close: the leading term $-\dfrac{\varphi(x)}{x}$ is as expected, and multiplying through by $x$ gives $x\,\Phi(x) = -\varphi(x) - x\displaystyle\int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt$. However, I was stuck for the longest time trying to show that the second term goes to zero as $x \to -\infty$.
So, once again, I consulted Bender and Orszag, which outlined how to show this. We note that

$\dfrac{1}{t^2} \le \dfrac{1}{x^2}$ whenever $t \le x < 0$.

Therefore,

$\displaystyle\int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt \le \frac{1}{x^2}\int_{-\infty}^{x} \varphi(t)\,dt = \frac{\Phi(x)}{x^2}$,

so that

$0 \le -x\displaystyle\int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt \le -\frac{\Phi(x)}{x}$.

Therefore, since $-\dfrac{\Phi(x)}{x} \to 0$ as $x \to -\infty$,

$\displaystyle\lim_{x \to -\infty} x\int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt = 0$,

or

$\displaystyle\lim_{x \to -\infty} x\,\Phi(x) = \lim_{x \to -\infty} \left[-\varphi(x) - x\int_{-\infty}^{x} \frac{\varphi(t)}{t^2}\,dt\right] = 0$.
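As a small numerical illustration of this limit (my own Python/SciPy check, standing in for the Mathematica graphs mentioned above):

from scipy.stats import norm

# watch x*Phi(x) shrink toward 0 as x -> -infinity
for x in [-2, -5, -10, -20, -30]:
    print(x, x * norm.cdf(x))

The products decay extremely quickly, consistent with the limit just derived.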
So (I thought) I was halfway home with the solution, and all that remained was to show that $f(x) = x\,\Phi(x) + \varphi(x)$ was an increasing function.
And I was completely stuck at this point for a long time.
Until I realized — much to my utter embarrassment — that showing $f(x)$ was increasing was completely unnecessary, as discussed in the next post.
The following problem appeared in Volume 131, Issue 9 (2024) of The American Mathematical Monthly.
Let $X$ and $Y$ be independent normally distributed random variables, each with its own mean and variance. Show that the variance of $X$ conditioned on the event $X > Y$ is smaller than the variance of $X$ alone.
We suppose that $E(X) = \mu_X$, $\text{Var}(X) = \sigma_X^2$, $E(Y) = \mu_Y$, and $\text{Var}(Y) = \sigma_Y^2$. With these definitions, we may write $X = \mu_X + \sigma_X Z_1$ and $Y = \mu_Y + \sigma_Y Z_2$, where $Z_1$ and $Z_2$ are independent standard normal random variables.
The goal is to show that $\text{Var}(X \mid X > Y) < \text{Var}(X)$. In previous posts, we showed that it will be sufficient to show that

$E\left[X^2 \mid X > Y\right] - \left(E[X \mid X > Y]\right)^2 < \sigma_X^2$,

where $\mu = \mu_X - \mu_Y$ and $\sigma = \sqrt{\sigma_X^2 + \sigma_Y^2}$. We also showed that

$P(X > Y) = \Phi(b)$,

where $b = \dfrac{\mu}{\sigma}$ and $\Phi$ is the cumulative distribution function of the standard normal distribution.
To compute

$\text{Var}(X \mid X > Y) = E\left[X^2 \mid X > Y\right] - \left(E[X \mid X > Y]\right)^2$,

we showed in the two previous posts that

$E[X \mid X > Y] = \mu_X + \dfrac{\sigma_X^2}{\sigma}\cdot\dfrac{\varphi(b)}{\Phi(b)}$

and

$E\left[X^2 \mid X > Y\right] = \mu_X^2 + \sigma_X^2 + \dfrac{\sigma_X^2\,(\mu_X + m)}{\sigma}\cdot\dfrac{\varphi(b)}{\Phi(b)}$,

where $\varphi = \Phi'$ is the probability density function of the standard normal distribution and $m = \dfrac{\mu_X\sigma_Y^2 + \mu_Y\sigma_X^2}{\sigma^2}$. Therefore, using $m - \mu_X = -\dfrac{\mu\,\sigma_X^2}{\sigma^2}$,

$\text{Var}(X \mid X > Y) = \sigma_X^2 - \dfrac{\sigma_X^4}{\sigma^2}\cdot\dfrac{\varphi(b)\left[b\,\Phi(b) + \varphi(b)\right]}{\left[\Phi(b)\right]^2}$.
To show that $\text{Var}(X \mid X > Y) < \sigma_X^2$, it suffices to show that the second term must be positive. Furthermore, since the denominator of the second term is positive and $\varphi(b) > 0$, it suffices to show that

$b\,\Phi(b) + \varphi(b)$

must also be positive.
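Before describing how I eventually showed this, here is a quick Monte Carlo sanity check of the variance formula above (a Python spot-check of my own, with arbitrary made-up parameters; the posts themselves used Mathematica):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
muX, sX, muY, sY = 1.0, 2.0, 0.5, 1.5     # arbitrary test parameters

# simulate (X, Y) and keep only the samples with X > Y
X = rng.normal(muX, sX, 2_000_000)
Y = rng.normal(muY, sY, 2_000_000)
sample_var = X[X > Y].var()

# the closed-form conditional variance derived above
mu, sigma = muX - muY, np.hypot(sX, sY)
b = mu / sigma
closed = sX**2 - (sX**4 / sigma**2) * norm.pdf(b) * (b * norm.cdf(b) + norm.pdf(b)) / norm.cdf(b)**2

print(sample_var, closed)   # the two values nearly agree

For these parameters the closed form works out to about $2.49$, comfortably below $\text{Var}(X) = 4$.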
And, to be honest, I was stuck here for the longest time.
At some point, I decided to plot the function $f(x) = x\,\Phi(x) + \varphi(x)$ in Mathematica to see if I could get some ideas flowing:

[Mathematica plot of $f(x) = x\,\Phi(x) + \varphi(x)$]
The function certainly looks like it’s always positive. What’s more, the graph suggests attempting to prove a couple of things: $f(x)$ is an increasing function, and

$\displaystyle\lim_{x \to -\infty} f(x) = 0$.

If I could prove both of these claims, then that would prove that $f(x)$ must always be positive.
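For readers following along without Mathematica, a rough matplotlib stand-in for the plot described above:

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

x = np.linspace(-5, 3, 400)
plt.plot(x, x * norm.cdf(x) + norm.pdf(x))   # f(x) = x*Phi(x) + phi(x)
plt.axhline(0, color="gray", linewidth=0.5)  # reference line y = 0
plt.xlabel("x")
plt.ylabel("x Phi(x) + phi(x)")
plt.show()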
Spoiler alert: this was almost a dead-end approach to the problem. I managed to prove one of these claims, but not the other. (I don’t doubt it’s true, but I didn’t find a proof.) I’ll discuss this in the next post.
The following problem appeared in Volume 131, Issue 9 (2024) of The American Mathematical Monthly.
Let $X$ and $Y$ be independent normally distributed random variables, each with its own mean and variance. Show that the variance of $X$ conditioned on the event $X > Y$ is smaller than the variance of $X$ alone.
We suppose that $E(X) = \mu_X$, $\text{Var}(X) = \sigma_X^2$, $E(Y) = \mu_Y$, and $\text{Var}(Y) = \sigma_Y^2$. With these definitions, we may write $X = \mu_X + \sigma_X Z_1$ and $Y = \mu_Y + \sigma_Y Z_2$, where $Z_1$ and $Z_2$ are independent standard normal random variables.
The goal is to show that $\text{Var}(X \mid X > Y) < \text{Var}(X)$. In previous posts, we showed that it will be sufficient to show that

$E\left[X^2 \mid X > Y\right] - \left(E[X \mid X > Y]\right)^2 < \sigma_X^2$,

where $\mu = \mu_X - \mu_Y$ and $\sigma = \sqrt{\sigma_X^2 + \sigma_Y^2}$. We also showed that

$P(X > Y) = \Phi(b)$,

where $b = \dfrac{\mu}{\sigma}$ and $\Phi$ is the cumulative distribution function of the standard normal distribution.
To compute

$\text{Var}(X \mid X > Y) = E\left[X^2 \mid X > Y\right] - \left(E[X \mid X > Y]\right)^2$,

we showed in the previous post that

$E[X \mid X > Y] = \mu_X + \dfrac{\sigma_X^2}{\sigma}\cdot\dfrac{\varphi(b)}{\Phi(b)}$,

where $\varphi = \Phi'$ is the probability density function of the standard normal distribution. We now turn to the second conditional expectation:

$E\left[X^2 \mid X > Y\right] = \dfrac{E\left[X^2\,\mathbf{1}_A\right]}{P(X > Y)}$.
The expected value in the numerator is a double integral:

$E\left[X^2\,\mathbf{1}_A\right] = \displaystyle\iint_{x > y} x^2\, f(x,y)\,dA$,

where $f(x,y)$ is the joint probability density function of $X$ and $Y$. Since $X$ and $Y$ are independent, $f(x,y)$ is the product of the individual probability density functions:

$f(x,y) = f_X(x)\,f_Y(y) = \dfrac{1}{\sigma_X\sqrt{2\pi}}\,e^{-(x-\mu_X)^2/2\sigma_X^2} \cdot \dfrac{1}{\sigma_Y\sqrt{2\pi}}\,e^{-(y-\mu_Y)^2/2\sigma_Y^2}$.
Therefore, we must compute

$E\left[X^2\,\mathbf{1}_A\right] = \displaystyle\int_{-\infty}^{\infty}\int_{y}^{\infty} x^2\, f_X(x)\, f_Y(y)\,dx\,dy$,

where I wrote $A$ for the event $X > Y$.
I’m not above admitting that I first stuck this into Mathematica to make sure that this was doable. To begin, we use integration by parts on the inner integral. Since $f_X'(x) = -\dfrac{x - \mu_X}{\sigma_X^2}\,f_X(x)$, we may write $x\,f_X(x) = \mu_X\,f_X(x) - \sigma_X^2\,f_X'(x)$, so that

$\displaystyle\int_{y}^{\infty} x^2\, f_X(x)\,dx = \mu_X\int_{y}^{\infty} x\, f_X(x)\,dx - \sigma_X^2\left\{\Big[x\,f_X(x)\Big]_{y}^{\infty} - \int_{y}^{\infty} f_X(x)\,dx\right\} = \mu_X\int_{y}^{\infty} x\, f_X(x)\,dx + \sigma_X^2\, y\, f_X(y) + \sigma_X^2\left[1 - \Phi\!\left(\frac{y - \mu_X}{\sigma_X}\right)\right]$.

Therefore,

$E\left[X^2\,\mathbf{1}_A\right] = \sigma_X^2\displaystyle\int_{-\infty}^{\infty} y\, f_X(y)\, f_Y(y)\,dy + \mu_X\displaystyle\int_{-\infty}^{\infty}\int_{y}^{\infty} x\, f_X(x)\, f_Y(y)\,dx\,dy + \sigma_X^2\displaystyle\int_{-\infty}^{\infty}\left[1 - \Phi\!\left(\frac{y - \mu_X}{\sigma_X}\right)\right] f_Y(y)\,dy$.

The second term is equal to $\mu_X\left[\mu_X\,\Phi(b) + \dfrac{\sigma_X^2}{\sigma}\,\varphi(b)\right]$ since the double integral is $E\left[X\,\mathbf{1}_A\right]$, which we computed in the previous post, while the third term is equal to $\sigma_X^2\,\Phi(b)$ since its integral is just $P(X > Y)$. For the first integral, we complete the square as before:

$\sigma_X^2\displaystyle\int_{-\infty}^{\infty} y\, f_X(y)\, f_Y(y)\,dy = \dfrac{\sigma_X^2}{2\pi\,\sigma_X\,\sigma_Y}\displaystyle\int_{-\infty}^{\infty} y\,\exp\!\left[-\frac{(y - m)^2}{2s^2}\right]\exp\!\left[-\frac{\mu^2}{2\sigma^2}\right]dy$,

where $m = \dfrac{\mu_X\sigma_Y^2 + \mu_Y\sigma_X^2}{\sigma^2}$ and $s = \dfrac{\sigma_X\,\sigma_Y}{\sigma}$, as in the previous post.
I now rewrite the integrand so that it has the form of the probability density function of a normal distribution, writing $\varphi(b) = \dfrac{e^{-\mu^2/2\sigma^2}}{\sqrt{2\pi}}$ and multiplying and dividing by $s$ in the denominator:

$\sigma_X^2\displaystyle\int_{-\infty}^{\infty} y\, f_X(y)\, f_Y(y)\,dy = \dfrac{\sigma_X^2\,\varphi(b)}{\sigma}\displaystyle\int_{-\infty}^{\infty} y\cdot\dfrac{1}{s\sqrt{2\pi}}\,e^{-(y - m)^2/2s^2}\,dy$.
This is an example of making a problem easier by apparently making it harder. The integrand has the probability density function of a normally distributed random variable with mean $m$ and variance $s^2$. Therefore, the integral is equal to $m$, the mean of that random variable, so that

$\sigma_X^2\displaystyle\int_{-\infty}^{\infty} y\, f_X(y)\, f_Y(y)\,dy = \dfrac{\sigma_X^2\, m\,\varphi(b)}{\sigma}$,

$E\left[X^2\,\mathbf{1}_A\right] = \dfrac{\sigma_X^2\, m\,\varphi(b)}{\sigma} + \mu_X\left[\mu_X\,\Phi(b) + \dfrac{\sigma_X^2}{\sigma}\,\varphi(b)\right] + \sigma_X^2\,\Phi(b) = \left(\mu_X^2 + \sigma_X^2\right)\Phi(b) + \dfrac{\sigma_X^2\,(\mu_X + m)\,\varphi(b)}{\sigma}$.
Therefore,

$E\left[X^2 \mid X > Y\right] = \dfrac{E\left[X^2\,\mathbf{1}_A\right]}{P(X > Y)} = \mu_X^2 + \sigma_X^2 + \dfrac{\sigma_X^2\,(\mu_X + m)}{\sigma}\cdot\dfrac{\varphi(b)}{\Phi(b)}$.
We note that this reduces to what we found in the second special case: if $\mu_X = \mu_Y = 0$ and $\sigma_X = \sigma_Y = 1$, then

$b = 0$

and

$m = 0$,

so that

$E\left[X^2 \mid X > Y\right] = 0 + 1 + 0 = 1$,

matching what we found earlier.
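As a sanity check on all of this algebra, the closed form can be compared against direct numerical integration (my own Python/SciPy spot-check with arbitrary parameters, in the same spirit as the Mathematica check mentioned above):

import numpy as np
from scipy.stats import norm
from scipy.integrate import dblquad

muX, sX, muY, sY = 1.0, 2.0, 0.5, 1.5     # arbitrary test parameters
mu, sigma = muX - muY, np.hypot(sX, sY)
b = mu / sigma
m = (muX * sY**2 + muY * sX**2) / sigma**2

# E[X^2 1_A]: integrate x^2 f_X(x) f_Y(y) over the region x > y
numerator, _ = dblquad(
    lambda x, y: x**2 * norm.pdf(x, muX, sX) * norm.pdf(y, muY, sY),
    -np.inf, np.inf,                 # outer variable y
    lambda y: y, lambda y: np.inf,   # inner variable x runs from y to infinity
)

closed = muX**2 + sX**2 + (sX**2 * (muX + m) / sigma) * norm.pdf(b) / norm.cdf(b)
print(numerator / norm.cdf(b), closed)   # the two values agree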
In the next post, we consider the calculation of $\text{Var}(X \mid X > Y)$.
The following problem appeared in Volume 131, Issue 9 (2024) of The American Mathematical Monthly.
Let $X$ and $Y$ be independent normally distributed random variables, each with its own mean and variance. Show that the variance of $X$ conditioned on the event $X > Y$ is smaller than the variance of $X$ alone.
We suppose that $E(X) = \mu_X$, $\text{Var}(X) = \sigma_X^2$, $E(Y) = \mu_Y$, and $\text{Var}(Y) = \sigma_Y^2$. With these definitions, we may write $X = \mu_X + \sigma_X Z_1$ and $Y = \mu_Y + \sigma_Y Z_2$, where $Z_1$ and $Z_2$ are independent standard normal random variables.
The goal is to show that $\text{Var}(X \mid X > Y) < \text{Var}(X)$. In the previous two posts, we showed that it will be sufficient to show that

$E\left[X^2 \mid X > Y\right] - \left(E[X \mid X > Y]\right)^2 < \sigma_X^2$,

where $\mu = \mu_X - \mu_Y$ and $\sigma = \sqrt{\sigma_X^2 + \sigma_Y^2}$. We also showed that

$P(X > Y) = \Phi(b)$,

where $b = \dfrac{\mu}{\sigma}$ and $\Phi$ is the cumulative distribution function of the standard normal distribution.
To compute

$\text{Var}(X \mid X > Y)$,

we begin with

$E[X \mid X > Y] = \dfrac{E\left[X\,\mathbf{1}_A\right]}{P(X > Y)}$.
The expected value in the numerator is a double integral:

$E\left[X\,\mathbf{1}_A\right] = \displaystyle\iint_{x > y} x\, f(x,y)\,dA$,

where $f(x,y)$ is the joint probability density function of $X$ and $Y$. Since $X$ and $Y$ are independent, $f(x,y)$ is the product of the individual probability density functions:

$f(x,y) = f_X(x)\,f_Y(y) = \dfrac{1}{\sigma_X\sqrt{2\pi}}\,e^{-(x-\mu_X)^2/2\sigma_X^2} \cdot \dfrac{1}{\sigma_Y\sqrt{2\pi}}\,e^{-(y-\mu_Y)^2/2\sigma_Y^2}$.
Therefore, we must compute

$E\left[X\,\mathbf{1}_A\right] = \displaystyle\int_{-\infty}^{\infty}\int_{y}^{\infty} x\, f_X(x)\, f_Y(y)\,dx\,dy$,

where I wrote $A$ for the event $X > Y$.
I’m not above admitting that I first stuck this into Mathematica to make sure that this was doable. To begin, we compute the inner integral:

$\displaystyle\int_{y}^{\infty} x\, f_X(x)\,dx = \mu_X\left[1 - \Phi\!\left(\frac{y - \mu_X}{\sigma_X}\right)\right] + \sigma_X\,\varphi\!\left(\frac{y - \mu_X}{\sigma_X}\right)$,

where $\varphi = \Phi'$ is the probability density function of the standard normal distribution. Integrating the first term against $f_Y(y)$ gives $\mu_X\,P(X > Y) = \mu_X\,\Phi(b)$, so it remains to compute $\sigma_X\displaystyle\int_{-\infty}^{\infty} \varphi\!\left(\frac{y - \mu_X}{\sigma_X}\right) f_Y(y)\,dy$. At this point, I used a standard technique/trick of completing the square to rewrite the integrand as a common pdf:

$\sigma_X\,\varphi\!\left(\frac{y - \mu_X}{\sigma_X}\right) f_Y(y) = \dfrac{\sigma_X}{2\pi\,\sigma_Y}\,\exp\!\left[-\frac{(y - m)^2}{2s^2}\right]\exp\!\left[-\frac{\mu^2}{2\sigma^2}\right]$,

where $m = \dfrac{\mu_X\sigma_Y^2 + \mu_Y\sigma_X^2}{\sigma^2}$ and $s = \dfrac{\sigma_X\,\sigma_Y}{\sigma}$.
I now rewrite the integrand so that it has the form of the probability density function of a normal distribution, writing $\varphi(b) = \dfrac{e^{-\mu^2/2\sigma^2}}{\sqrt{2\pi}}$ and multiplying and dividing by $s$ in the denominator:

$\sigma_X\displaystyle\int_{-\infty}^{\infty} \varphi\!\left(\frac{y - \mu_X}{\sigma_X}\right) f_Y(y)\,dy = \dfrac{\sigma_X^2\,\varphi(b)}{\sigma}\displaystyle\int_{-\infty}^{\infty} \dfrac{1}{s\sqrt{2\pi}}\,e^{-(y - m)^2/2s^2}\,dy$.
This is an example of making a problem easier by apparently making it harder. The integrand is equal to $f_U(y)$, where $U$ is a normally distributed random variable with mean $m$ and variance $s^2$. Since

$\displaystyle\int_{-\infty}^{\infty} f_U(y)\,dy = 1$,

we have

$E\left[X\,\mathbf{1}_A\right] = \mu_X\,\Phi(b) + \dfrac{\sigma_X^2}{\sigma}\,\varphi(b)$,

and so

$E[X \mid X > Y] = \dfrac{E\left[X\,\mathbf{1}_A\right]}{P(X > Y)} = \mu_X + \dfrac{\sigma_X^2}{\sigma}\cdot\dfrac{\varphi(b)}{\Phi(b)}$.
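Completing the square is easy to botch, so the exponent identity above can be double-checked symbolically (a SymPy check of my own, not part of the original computation):

import sympy as sp

y = sp.symbols('y', real=True)
muX, muY = sp.symbols('mu_X mu_Y', real=True)
sX, sY = sp.symbols('sigma_X sigma_Y', positive=True)

sigma2 = sX**2 + sY**2
m = (muX * sY**2 + muY * sX**2) / sigma2
s2 = sX**2 * sY**2 / sigma2

# the exponent before and after completing the square
lhs = -(y - muX)**2 / (2 * sX**2) - (y - muY)**2 / (2 * sY**2)
rhs = -(y - m)**2 / (2 * s2) - (muX - muY)**2 / (2 * sigma2)

print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity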
We note that this reduces to what we found in the second special case: if $\mu_X = \mu_Y = 0$, $\sigma_X = 1$, and $\sigma_Y = 1$, then

$\mu = 0$,

$\sigma = \sqrt{2}$,

and

$b = 0$. Since

$\Phi(0) = \dfrac{1}{2}$ and $\varphi(0) = \dfrac{1}{\sqrt{2\pi}}$,

we have

$E[X \mid X > Y] = 0 + \dfrac{1}{\sqrt{2}}\cdot\dfrac{1/\sqrt{2\pi}}{1/2} = \dfrac{1}{\sqrt{\pi}}$,

matching what we found earlier.
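Finally, a quick Monte Carlo check of the closed form for $E[X \mid X > Y]$, including the special case above (my own Python spot-check; the parameters are arbitrary):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def conditional_mean(muX, sX, muY, sY, n=2_000_000):
    X = rng.normal(muX, sX, n)
    Y = rng.normal(muY, sY, n)
    return X[X > Y].mean()     # sample mean of X given X > Y

muX, sX, muY, sY = 1.0, 2.0, 0.5, 1.5
sigma = np.hypot(sX, sY)
b = (muX - muY) / sigma
print(conditional_mean(muX, sX, muY, sY))                   # simulation
print(muX + (sX**2 / sigma) * norm.pdf(b) / norm.cdf(b))    # closed form

# the special case: X, Y iid standard normal should give 1/sqrt(pi)
print(conditional_mean(0, 1, 0, 1), 1 / np.sqrt(np.pi))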
In the next post, we consider the calculation of $E\left[X^2 \mid X > Y\right]$.