The following problem appeared in Volume 131, Issue 9 (2024) of The American Mathematical Monthly.
Let $X$ and $Y$ be independent normally distributed random variables, each with its own mean and variance. Show that the variance of $X$ conditioned on the event $X < Y$ is smaller than the variance of $X$ alone.
We suppose that $\mathrm{E}[X] = \mu_X$, $\mathrm{Var}(X) = \sigma_X^2$, $\mathrm{E}[Y] = \mu_Y$, and $\mathrm{Var}(Y) = \sigma_Y^2$. With these definitions, we may write $X = \mu_X + \sigma_X Z_1$ and $Y = \mu_Y + \sigma_Y Z_2$, where $Z_1$ and $Z_2$ are independent standard normal random variables.
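Before going further, it's worth a quick Monte Carlo sanity check on the claim itself. This is a Python sketch of my own (not part of the original problem), and the particular means and standard deviations below are arbitrary choices:

```python
import numpy as np

# Monte Carlo sanity check of the claim. The parameter values below are
# arbitrary illustrative choices of mine, not part of the original problem.
rng = np.random.default_rng(seed=0)
n = 10_000_000
mu_x, sigma_x = 1.0, 2.0
mu_y, sigma_y = 0.5, 1.5

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
x = mu_x + sigma_x * z1  # X = mu_X + sigma_X * Z_1
y = mu_y + sigma_y * z2  # Y = mu_Y + sigma_Y * Z_2

cond_var = x[x < y].var()  # sample variance of X restricted to the event X < Y
full_var = x.var()         # sample variance of X alone
print(cond_var < full_var)
```

With ten million samples the comparison is unambiguous: the conditional sample variance comes out noticeably below the unconditional one.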
The goal is to show that $\mathrm{Var}(X \mid X < Y) < \mathrm{Var}(X)$. In previous posts, we showed that it will be sufficient to show that $\mathrm{Var}(Z_1 \mid W < \beta) < 1$, where $W = \dfrac{\sigma_X Z_1 - \sigma_Y Z_2}{\sqrt{\sigma_X^2 + \sigma_Y^2}}$ and $\beta = \dfrac{\mu_Y - \mu_X}{\sqrt{\sigma_X^2 + \sigma_Y^2}}$, so that the event $X < Y$ is the same as the event $W < \beta$. We also showed that $\mathrm{P}(X < Y) = \Phi(\beta)$, where $\phi$ is the probability density function and $\Phi$ is the cumulative distribution function of the standard normal distribution.
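This identity is easy to check numerically; here is a sketch in Python with `scipy` (rather than Mathematica), again with illustrative parameter values of my own:

```python
import numpy as np
from scipy.stats import norm

# Check that P(X < Y) = Phi(beta). Parameter values are arbitrary choices.
rng = np.random.default_rng(seed=1)
n = 10_000_000
mu_x, sigma_x = 1.0, 2.0
mu_y, sigma_y = 0.5, 1.5

# beta = (mu_Y - mu_X) / sqrt(sigma_X^2 + sigma_Y^2)
beta = (mu_y - mu_x) / np.hypot(sigma_x, sigma_y)

x = rng.normal(mu_x, sigma_x, n)
y = rng.normal(mu_y, sigma_y, n)

empirical = (x < y).mean()         # empirical frequency of the event X < Y
print(abs(empirical - norm.cdf(beta)) < 2e-3)
```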
To compute $\mathrm{Var}(Z_1 \mid W < \beta)$, we showed in the two previous posts that $\mathrm{E}[Z_1 \mid W < \beta] = -\rho\,\dfrac{\phi(\beta)}{\Phi(\beta)}$ and $\mathrm{E}[Z_1^2 \mid W < \beta] = 1 - \rho^2\,\dfrac{\beta\,\phi(\beta)}{\Phi(\beta)}$, where $\rho = \dfrac{\sigma_X}{\sqrt{\sigma_X^2 + \sigma_Y^2}}$ is the correlation between $Z_1$ and $W$. Therefore,

$$\mathrm{Var}(Z_1 \mid W < \beta) = \mathrm{E}[Z_1^2 \mid W < \beta] - \mathrm{E}[Z_1 \mid W < \beta]^2 = 1 - \rho^2\,\frac{\beta\,\phi(\beta)\,\Phi(\beta) + \phi(\beta)^2}{\Phi(\beta)^2}.$$
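The closed form can be compared against simulation. This is a numeric sketch in Python, not part of the derivation, and the parameter values are again made up for illustration:

```python
import numpy as np
from scipy.stats import norm

# Compare the closed form for Var(Z_1 | W < beta) with a Monte Carlo
# estimate. Parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(seed=2)
n = 10_000_000
mu_x, sigma_x = 1.0, 2.0
mu_y, sigma_y = 0.5, 1.5

s = np.hypot(sigma_x, sigma_y)           # sqrt(sigma_X^2 + sigma_Y^2)
beta = (mu_y - mu_x) / s
rho = sigma_x / s                        # correlation of Z_1 and W

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
w = (sigma_x * z1 - sigma_y * z2) / s    # standard normal; W < beta iff X < Y

mc_var = z1[w < beta].var()              # Monte Carlo estimate
closed = 1 - rho**2 * (beta * norm.pdf(beta) * norm.cdf(beta)
                       + norm.pdf(beta)**2) / norm.cdf(beta)**2
print(abs(mc_var - closed) < 5e-3)
```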
To show that $\mathrm{Var}(Z_1 \mid W < \beta) < 1$, it suffices to show that the second term must be positive. Furthermore, since the denominator of the second term is positive, as are the factors $\rho^2$ and $\phi(\beta)$, it suffices to show that $f(\beta) = \beta\,\Phi(\beta) + \phi(\beta)$ must also be positive.
And, to be honest, I was stuck here for the longest time.
At some point, I decided to plot this function in Mathematica to see if I could get some ideas flowing:

[Plot of $f(x) = x\,\Phi(x) + \phi(x)$.]
The function certainly looks like it’s always positive. What’s more, the graph suggests attempting to prove a couple of things: $f(x) = x\,\Phi(x) + \phi(x)$ is an increasing function, and $\lim_{x \to -\infty} f(x) = 0$. If I could prove both of these claims, then that would prove that $f(x)$ must always be positive.
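For what it’s worth, both claims can also be probed numerically in Python (my stand-in here for the Mathematica plot); this is evidence on a finite grid, not a proof:

```python
import numpy as np
from scipy.stats import norm

# Numeric evidence for the two claims about f(x) = x*Phi(x) + phi(x):
# it appears to be increasing, and it appears to vanish as x -> -infinity.
def f(x):
    return x * norm.cdf(x) + norm.pdf(x)

xs = np.linspace(-10.0, 10.0, 2001)
vals = f(xs)

print(bool(np.all(np.diff(vals) > 0)))  # increasing on this grid
print(bool(np.all(vals > 0)))           # positive on this grid
print(float(f(-10.0)) < 1e-12)          # already tiny far to the left
```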
Spoiler alert: this was almost a dead-end approach to the problem. I managed to prove one of the claims, but not the other. (I don’t doubt it’s true, but I didn’t find a proof.) I’ll discuss this in the next post.