The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Evaluate the following sums in closed form:
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)$$
and
$$\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right).$$
By using the Taylor series expansions of $\cos x$ and $\sin x$ and flipping the order of a double sum, I was able to show that
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)=-\frac{x\sin x}{2}\quad\text{and}\quad\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right)=\frac{x\cos x-\sin x}{2}.$$
I immediately got to thinking: there's nothing particularly special about $\cos x$ and $\sin x$ for this analysis. Is there a way of generalizing this result to all functions with a Taylor series expansion?
Suppose
$$f(x)=\sum_{k=0}^{\infty}a_k x^k,$$
and let's use the same technique to evaluate
$$\sum_{n=0}^{\infty}\left(f(x)-\sum_{k=0}^{n}a_k x^k\right)=\sum_{n=0}^{\infty}\;\sum_{k=n+1}^{\infty}a_k x^k=\sum_{k=1}^{\infty}\;\sum_{n=0}^{k-1}a_k x^k=\sum_{k=1}^{\infty}k\,a_k x^k=x\,f'(x).$$
To see why this matches our above results, let's start with $\cos x$ and write out the full Taylor series expansion, including zero coefficients:
$$\cos x=1+0\cdot x-\frac{x^2}{2!}+0\cdot x^3+\frac{x^4}{4!}+0\cdot x^5-\frac{x^6}{6!}+\cdots,$$
so that
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}a_k x^k\right)=x\,(\cos x)'$$
or
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}a_k x^k\right)=-x\sin x.$$
After dropping the zero terms and collecting (each distinct tail of the cosine series appears twice in the sum above, once for an even $n$ and once for the following odd $n$), we obtain
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)=-\frac{x\sin x}{2}.$$
A similar calculation would apply to any even function $f$.
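Since this argument is easy to check numerically, here is a quick Python sanity check (the helper names are mine, not from the original problem): it confirms the general identity $\sum_{n}(f(x)-T_n(x))=x\,f'(x)$ for $f=\cos$, and the halving that occurs when the partial sums are indexed by even degree only, as in the original problem.

```python
import math

def tail_sums(coeffs, x, terms=120):
    """Sum over n of (f(x) - T_n(x)): the sum of the tails of the Taylor series.

    coeffs[k] is the Taylor coefficient a_k; both sums are truncated, which is
    harmless for moderate |x| because the tails decay factorially fast.
    """
    return sum(
        sum(coeffs[k] * x**k for k in range(n + 1, len(coeffs)))
        for n in range(terms)
    )

# Taylor coefficients of cos x, written out with the zero coefficients included
N = 80
a = [0.0] * N
for j in range(0, N, 2):
    a[j] = (-1) ** (j // 2) / math.factorial(j)

x = 1.3
# General identity: sum_n (f(x) - T_n(x)) = x f'(x); for f = cos this is -x sin x
assert abs(tail_sums(a, x) - (-x * math.sin(x))) < 1e-9

# Indexing the partial sums by even degree only, as in the original problem,
# counts each distinct tail once instead of twice, so the value is halved:
even_version = sum(
    sum(a[2 * k] * x ** (2 * k) for k in range(n + 1, N // 2))
    for n in range(60)
)
assert abs(even_version - (-x * math.sin(x) / 2)) < 1e-9
```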
The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Evaluate the following sums in closed form:
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)$$
and
$$\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right).$$
In the previous two posts, I showed that
$$C(x)=\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)=-\frac{x\sin x}{2}$$
and
$$S(x)=\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right)=\frac{x\cos x-\sin x}{2};$$
the technique that I used was using the Taylor series expansions of $\cos x$ and $\sin x$ to write $C(x)$ and $S(x)$ as double sums and then interchanging the order of summation.
In this post, I share an alternate way of solving for $C(x)$ and $S(x)$. I wish I could take credit for this, but I first learned the idea from my daughter. If we differentiate
$$S(x)=\sum_{k=1}^{\infty}\frac{k\,(-1)^k x^{2k+1}}{(2k+1)!},$$
we obtain
$$S'(x)=\sum_{k=1}^{\infty}\frac{k\,(2k+1)(-1)^k x^{2k}}{(2k+1)!}=\sum_{k=1}^{\infty}\frac{k\,(-1)^k x^{2k}}{(2k)!}=C(x).$$
Something similar happens when differentiating the series for $C(x)$; however, it's not quite so simple because of the $k=1$ term. I begin by separating the $k=1$ term from the sum, so that a sum from $k=2$ to $\infty$ remains:
$$C(x)=-\frac{x^2}{2}+\sum_{k=2}^{\infty}\frac{k\,(-1)^k x^{2k}}{(2k)!}.$$
I then differentiate as before:
$$C'(x)=-x+\sum_{k=2}^{\infty}\frac{k\,(-1)^k x^{2k-1}}{(2k-1)!}.$$
At this point, we reindex the sum. We make the replacement $j=k-1$, so that $k=j+1$ and $j$ varies from $1$ to $\infty$. After the replacement, we then change the dummy index from $j$ back to $k$.
With a slight alteration to the $-x$ term, the result is exactly the definition of $-S(x)-\sin x$:
$$C'(x)=-x+\sum_{k=1}^{\infty}\frac{(k+1)(-1)^{k+1}x^{2k+1}}{(2k+1)!}=-S(x)-\left(x+\sum_{k=1}^{\infty}\frac{(-1)^{k}x^{2k+1}}{(2k+1)!}\right)=-S(x)-\sin x.$$
Summarizing, we have shown that $S'(x)=C(x)$ and $C'(x)=-S(x)-\sin x$. Differentiating a second time, we obtain
$$C''(x)=-S'(x)-\cos x=-C(x)-\cos x,$$
or
$$C''(x)+C(x)=-\cos x.$$
This last equation is a second-order nonhomogeneous linear differential equation with constant coefficients. A particular solution, using the method of undetermined coefficients, must have the form $C_p(x)=x\,(A\cos x+B\sin x)$. Substituting, we see that
$$C_p''(x)+C_p(x)=-2A\sin x+2B\cos x=-\cos x.$$
We see that $A=0$ and $B=-\frac{1}{2}$, which then leads to the particular solution
$$C_p(x)=-\frac{x\sin x}{2}.$$
Since $\cos x$ and $\sin x$ are solutions of the associated homogeneous equation $y''+y=0$, we conclude that
$$C(x)=-\frac{x\sin x}{2}+c_1\cos x+c_2\sin x,$$
where the values of $c_1$ and $c_2$ depend on the initial conditions on $C(x)$. As it turns out, it is straightforward to compute $C(0)$ and $C'(0)$, so we will choose $x=0$ for the initial conditions. We observe that $C(0)$ and $C'(0)$ are both clearly equal to $0$, so that $c_1=c_2=0$ as well. Therefore, $C(x)=-\frac{x\sin x}{2}$, and then $S(x)=-C'(x)-\sin x=\frac{x\cos x-\sin x}{2}$.
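These relations are easy to test numerically. The sketch below (function names are mine) checks $S'=C$, $C'=-S-\sin x$, and the ODE $C''+C=-\cos x$ for the closed forms, using simple difference quotients.

```python
import math

def C(x):
    # closed form of the first sum
    return -x * math.sin(x) / 2

def S(x):
    # closed form of the second sum
    return (x * math.cos(x) - math.sin(x)) / 2

def deriv(f, x, h=1e-5):
    """Symmetric difference quotient, accurate to O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def second_deriv(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for x in [0.0, 0.7, 1.9, -2.3]:
    assert abs(deriv(S, x) - C(x)) < 1e-6                       # S'(x) = C(x)
    assert abs(deriv(C, x) + S(x) + math.sin(x)) < 1e-6         # C'(x) = -S(x) - sin x
    assert abs(second_deriv(C, x) + C(x) + math.cos(x)) < 1e-4  # C'' + C = -cos x
assert C(0.0) == 0.0 and abs(deriv(C, 0.0)) < 1e-9              # initial conditions
```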
The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Evaluate the following sums in closed form:
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)$$
and
$$\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right).$$
In the previous post, we showed that
$$C(x)=\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)=-\frac{x\sin x}{2}$$
by writing the series as a double sum and then reversing the order of summation. We proceed with very similar logic to evaluate
$$S(x)=\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right).$$
Since
$$\sin x=\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!}$$
is the Taylor series expansion of $\sin x$, we may write $S(x)$ as
$$S(x)=\sum_{n=0}^{\infty}\;\sum_{k=n+1}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!}.$$
As before, we employ one of my favorite techniques from the bag of tricks: reversing the order of summation. Also as before, the inner sum is independent of $n$, and so the inner sum is simply equal to the summand times the number of terms. We see that
$$S(x)=\sum_{k=1}^{\infty}\;\sum_{n=0}^{k-1}\frac{(-1)^k x^{2k+1}}{(2k+1)!}=\sum_{k=1}^{\infty}\frac{k\,(-1)^k x^{2k+1}}{(2k+1)!}.$$
At this point, the solution for $S(x)$ diverges from the previous solution for $C(x)$. I want to cancel the factor of $k$ in the summand; however, the denominator is
$$(2k+1)!=(2k+1)(2k)(2k-1)!,$$
and $k$ doesn't cancel cleanly with $2k+1$. Hypothetically, I could cancel as follows:
$$\frac{k}{(2k+1)!}=\frac{2k}{2\,(2k+1)!}=\frac{1}{2\,(2k+1)(2k-1)!},$$
but that introduces an extra $2k+1$ in the denominator that I'd rather avoid.
So, instead, I'll write $k$ as $\frac{(2k+1)-1}{2}$ and then distribute and split into two different sums:
$$S(x)=\sum_{k=1}^{\infty}\frac{(2k+1)-1}{2}\cdot\frac{(-1)^k x^{2k+1}}{(2k+1)!}=\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k)!}-\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!}.$$
At this point, I factored out a power of $x$ from the first sum. In this way, the two sums are the Taylor series expansions of $\cos x$ and $\sin x$:
$$S(x)=\frac{x}{2}\sum_{k=1}^{\infty}\frac{(-1)^k x^{2k}}{(2k)!}-\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!}=\frac{x\,(\cos x-1)}{2}-\frac{\sin x-x}{2}=\frac{x\cos x-\sin x}{2}.$$
This was sufficiently complicated that I was unable to guess this solution by experimenting with Mathematica; nevertheless, Mathematica can give graphical confirmation of the solution since the graphs of the two expressions overlap perfectly.
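Although I could not guess this closed form, it is easy to confirm numerically. The sketch below (names mine) compares truncated tail-sums of the sine series against $\frac{x\cos x-\sin x}{2}$.

```python
import math

def sin_tail_sum(x, terms=60):
    """Sum over n >= 0 of (sin x minus the nth partial sum of its Taylor series)."""
    total = 0.0
    for n in range(terms):
        partial = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
                      for k in range(n + 1))
        total += math.sin(x) - partial
    return total

for x in [0.5, 1.0, 2.0, -1.5]:
    closed_form = (x * math.cos(x) - math.sin(x)) / 2
    assert abs(sin_tail_sum(x) - closed_form) < 1e-9
```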
The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Evaluate the following sums in closed form:
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)$$
and
$$\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right).$$
We start with
$$C(x)=\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)$$
and the Taylor series
$$\cos x=\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k}}{(2k)!}.$$
With this, $C(x)$ can be written as
$$C(x)=\sum_{n=0}^{\infty}\;\sum_{k=n+1}^{\infty}\frac{(-1)^k x^{2k}}{(2k)!}.$$
At this point, my immediate thought was one of my favorite techniques from the bag of tricks: reversing the order of summation. (Two or three chapters of my Ph.D. thesis derived from knowing when to apply this technique.) We see that
$$C(x)=\sum_{k=1}^{\infty}\;\sum_{n=0}^{k-1}\frac{(-1)^k x^{2k}}{(2k)!}.$$
At this point, the inner sum is independent of $n$, and so the inner sum is simply equal to the summand times the number of terms. Since there are $k$ terms in the inner sum ($n=0,1,\dots,k-1$), we see
$$C(x)=\sum_{k=1}^{\infty}\frac{k\,(-1)^k x^{2k}}{(2k)!}.$$
To simplify, we multiply top and bottom by $2$ so that the first term of $(2k)!$ cancels:
$$C(x)=\sum_{k=1}^{\infty}\frac{2k\,(-1)^k x^{2k}}{2\,(2k)!}=\sum_{k=1}^{\infty}\frac{(-1)^k x^{2k}}{2\,(2k-1)!}.$$
At this point, I factored out a $-\frac{x}{2}$ and adjusted the sign of the summand to make the sum match the Taylor series for $\sin x$:
$$C(x)=-\frac{x}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}x^{2k-1}}{(2k-1)!}=-\frac{x\sin x}{2}.$$
I was unsurprised but comforted that this matched the guess I had made by experimenting with Mathematica.
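Both the interchange of summation and the final closed form can be confirmed numerically; in this sketch (names mine), the double sum is computed in both orders and compared with $-\frac{x\sin x}{2}$.

```python
import math

def cos_tail_sum(x, terms=60):
    """The double sum in its original order: sum over n of the tail after index n."""
    return sum(
        sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
            for k in range(n + 1, terms))
        for n in range(terms)
    )

def reversed_sum(x, terms=60):
    """After interchanging the order of summation: sum_k k (-1)^k x^(2k) / (2k)!."""
    return sum(k * (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(1, terms))

for x in [0.5, 1.0, 2.5]:
    assert abs(cos_tail_sum(x) - reversed_sum(x)) < 1e-10
    assert abs(reversed_sum(x) - (-x * math.sin(x) / 2)) < 1e-10
```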
The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Evaluate the following sums in closed form:
$$\sum_{n=0}^{\infty}\left(\cos x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}\right)$$
and
$$\sum_{n=0}^{\infty}\left(\sin x-\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}\right).$$
When I first read this problem, I immediately noticed that
$$\sum_{k=0}^{n}\frac{(-1)^k x^{2k}}{(2k)!}$$
is a Taylor polynomial of $\cos x$ and
$$\sum_{k=0}^{n}\frac{(-1)^k x^{2k+1}}{(2k+1)!}$$
is a Taylor polynomial of $\sin x$. In other words, the given expressions are the sums of the tail-sums of the Taylor series for $\cos x$ and $\sin x$.
As usual when stumped, I used technology to guide me. Here’s the graph of the first sum, adding the first 50 terms.
I immediately notice that the function oscillates, which makes me suspect that the answer involves either $\sin x$ or $\cos x$. I also notice that the sizes of the oscillations increase as $|x|$ increases, so that the answer should have the form $f(x)\sin x$ or $f(x)\cos x$, where $f$ is an increasing function. I also notice that the graph is symmetric about the $y$-axis, so that the function is even. Finally, I notice that the graph passes through the origin.
So, taking all of that in, one of my first guesses was $x\sin x$, which satisfies all of the above criteria.
That's not it, but it's not far off. The oscillations of my guess in orange are too big, and they're inverted from the actual graph in blue. After some guessing, I eventually landed on $-\frac{x\sin x}{2}$.
That was a very good sign... the two graphs were pretty much on top of each other. That's not a proof that $-\frac{x\sin x}{2}$ is the answer, of course, but it's certainly a good indicator.
I didn’t have the same luck with the other sum; I could graph it but wasn’t able to just guess what the curve could be.
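The graphing experiment is easy to reproduce numerically. The sketch below (names mine) adds the first 50 tails, just as in the plot, and confirms that the final guess matches while the first guess is doubled and inverted.

```python
import math

def first_sum(x, terms=50):
    """The plotted quantity: the sum of the first 50 tails of the cosine series."""
    return sum(
        math.cos(x) - sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
                          for k in range(n + 1))
        for n in range(terms)
    )

for x in [0.5, 1.5, 3.0]:
    final_guess = -x * math.sin(x) / 2
    assert abs(first_sum(x) - final_guess) < 1e-6
    # the first guess, x sin x, is -2 times the correct answer:
    # oscillations that are too big and inverted
    assert abs(x * math.sin(x) + 2 * final_guess) < 1e-12
```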
The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Let $A_1,A_2,\dots,A_n$ be arbitrary events in a probability field. Denote by $B_k$ the event that at least $k$ of $A_1,\dots,A_n$ occur. Prove that
$$\sum_{k=1}^{n}P(B_k)=\sum_{k=1}^{n}P(A_k).$$
I’ll admit when I first read this problem, I didn’t believe it. I had to draw a couple of Venn diagrams to convince myself that it actually worked:
Of course, pictures are not proofs, so I started giving the problem more thought.
I wish I could say where I got the inspiration from, but I got the idea to define a new random variable $N$ to be the number of events from $A_1,\dots,A_n$ that occur. With this definition, $B_k$ becomes the event that $N\ge k$, so that
$$\sum_{k=1}^{n}P(B_k)=\sum_{k=1}^{n}P(N\ge k).$$
At this point, my Spidey Sense went off: that's the tail-sum formula for expectation! Since $N$ is a non-negative integer-valued random variable, the mean of $N$ can be computed by
$$E(N)=\sum_{k=1}^{\infty}P(N\ge k),$$
and since $P(N\ge k)=0$ for $k>n$, the infinite sum reduces to its first $n$ terms. Said another way, $E(N)=\sum_{k=1}^{n}P(B_k)$.
Therefore, to solve the problem, it remains to show that $E(N)$ is also equal to $\sum_{k=1}^{n}P(A_k)$. To do this, I employed the standard technique from the bag of tricks of writing $N$ as the sum of indicator random variables. Define
$$I_k=\begin{cases}1&\text{if }A_k\text{ occurs},\\0&\text{otherwise}.\end{cases}$$
Then $N=I_1+I_2+\cdots+I_n$, so that
$$E(N)=\sum_{k=1}^{n}E(I_k)=\sum_{k=1}^{n}P(A_k).$$
Equating the two expressions for $E(N)$, we conclude that $\sum_{k=1}^{n}P(B_k)=\sum_{k=1}^{n}P(A_k)$, as claimed.
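The identity can be verified exactly on a small finite probability space. In the sketch below, the space, the weights, and the events are arbitrary choices of mine for illustration; the two sums agree exactly, as the argument above guarantees.

```python
from fractions import Fraction
from itertools import product

# Small finite probability space: the 8 corners of {0,1}^3,
# with an arbitrary non-uniform probability for each outcome.
outcomes = list(product([0, 1], repeat=3))
weights = {o: Fraction(1 + i, 36) for i, o in enumerate(outcomes)}
assert sum(weights.values()) == 1

n = 3

def P(event):
    """Probability of an event, given as a predicate on outcomes."""
    return sum(weights[o] for o in outcomes if event(o))

# Events A_1, A_2, A_3: "coordinate i equals 1"
sum_A = sum(P(lambda o, i=i: o[i] == 1) for i in range(n))
# B_k: at least k of the events occur; here N(o) = number of coordinates equal to 1
sum_B = sum(P(lambda o, k=k: sum(o) >= k) for k in range(1, n + 1))
assert sum_A == sum_B
```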
The following problem appeared in Volume 53, Issue 4 (2022) of The College Mathematics Journal.
Define, for every non-negative integer $n$, the $n$th Catalan number $C_n$ by
$$C_n=\frac{1}{n+1}\binom{2n}{n}.$$
Consider the sequence of complex polynomials in $z$ defined by $P_{n+1}(z)=z+[P_n(z)]^2$ for every non-negative integer $n$, where $P_0(z)=0$. It is clear that $P_n$ has degree $2^{n-1}$ for $n\ge1$ and thus has the representation
$$P_n(z)=\sum_{k=1}^{2^{n-1}}a_{n,k}\,z^k,$$
where each $a_{n,k}$ is a positive integer. Prove that $a_{n,k}=C_{k-1}$ for $1\le k\le n$.
This problem appeared in the same issue as the probability problem considered in the previous two posts. Looking back, I think that the confidence that I gained by solving that problem gave me the persistence to solve this problem as well.
My first thought when reading this problem was something like “This involves sums, polynomials, and binomial coefficients. And since the sequence is recursively defined, it’s probably going to involve a proof by mathematical induction. I can do this.”
My second thought was to use Mathematica to develop my own intuition and to confirm that the claimed pattern actually worked for the first few values of .
As claimed in the statement of the problem, each $P_n(z)$ is a polynomial of degree $2^{n-1}$ without a nontrivial constant term. Also, for each $n$, the term of degree $k$, for $k\le n$, has a coefficient that is independent of $n$ and equal to $C_{k-1}$. For example, for $n=5$, the coefficient of $z^5$ (in orange above) is equal to
$$C_4=\frac{1}{5}\binom{8}{4}=14,$$
and the problem claims that the coefficient of $z^5$ will remain $14$ for $n\ge5$.
Confident that the pattern actually worked, all that remained was pushing through the proof by induction.
We proceed by induction on $n$. The statement clearly holds for $n=1$:
$$P_1(z)=z+[P_0(z)]^2=z,$$
so that $a_{1,1}=1=C_0$.
Although not necessary, I'll add for good measure that
$$P_2(z)=z+[P_1(z)]^2=z+z^2$$
and
$$P_3(z)=z+[P_2(z)]^2=z+(z+z^2)^2=z+z^2+2z^3+z^4.$$
This next calculation illustrates what's coming later. In the previous calculation, the coefficient of $z^3$ is found by multiplying out
$$(z+z^2)(z+z^2).$$
This is accomplished by examining all pairs, one from the left product and one from the right product, so that the exponent works out to be $3$. In this case, it's
$$z\cdot z^2+z^2\cdot z=2z^3.$$
For the inductive step, we assume that, for some $n\ge1$, $a_{n,k}=C_{k-1}$ for all $k\le n$, and we define
$$P_{n+1}(z)=z+[P_n(z)]^2.$$
Our goal is to show that $a_{n+1,k}=C_{k-1}$ for $1\le k\le n+1$.
For $k=1$, the coefficient of $z$ in $P_{n+1}(z)$ is clearly $1$, or $C_0$.
For $2\le k\le n+1$, the coefficient of $z^k$ in $P_{n+1}(z)$ can be found by expanding the above square. Every product of the form $a_{n,i}z^i\cdot a_{n,k-i}z^{k-i}$ will contribute to the term $z^k$. Since $k-1\le n$ (since $k\le n+1$), the values of $i$ that will contribute to this term will be $i=1,2,\dots,k-1$. (Ordinarily, the $i=0$ and $i=k$ terms would also contribute; however, there is no constant term in the expression being squared.) Therefore, after using the induction hypothesis and reindexing, we find
$$a_{n+1,k}=\sum_{i=1}^{k-1}a_{n,i}\,a_{n,k-i}=\sum_{i=1}^{k-1}C_{i-1}\,C_{k-i-1}=\sum_{j=0}^{k-2}C_j\,C_{k-2-j}=C_{k-1}.$$
The last step used a recursive relationship for the Catalan numbers, $C_{m+1}=\sum_{j=0}^{m}C_j\,C_{m-j}$, that I vaguely recalled but absolutely had to look up to complete the proof.
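A short script can confirm the pattern. Under my reading of the setup ($P_0=0$ and $P_{n+1}(z)=z+[P_n(z)]^2$, with coefficients $a_{n,k}$), the coefficient of $z^k$ stabilizes at $C_{k-1}$ for $k\le n$, and the Catalan recursion used in the last step also checks out.

```python
from math import comb

def catalan(n):
    # C_n = (2n choose n) / (n + 1); the division is always exact
    return comb(2 * n, n) // (n + 1)

def poly_square(p):
    """Square a polynomial given as a list of coefficients (index = degree)."""
    q = [0] * (2 * len(p) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(p):
            q[i + j] += a * b
    return q

def poly_add(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(m)]

# P_0 = 0 and P_{n+1}(z) = z + [P_n(z)]^2
P = [0]
for n in range(1, 8):
    P = poly_add([0, 1], poly_square(P))
    # the coefficient of z^k has stabilized at C_{k-1} for all k <= n
    for k in range(1, n + 1):
        assert P[k] == catalan(k - 1)

# The Catalan recursion used in the last step of the induction
for m in range(10):
    assert catalan(m + 1) == sum(catalan(j) * catalan(m - j) for j in range(m + 1))
```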
The following problem appeared in Volume 53, Issue 4 (2022) of The College Mathematics Journal. This was the second half of a two-part problem.
Suppose that and are independent, uniform random variables over . Define , , , and as follows:
is uniform over ,
is uniform over ,
with and , and
.
Prove that is uniform over .
Once again, one way of showing that $W$ is uniform on $[0,1]$ is showing that $P(W\le w)=w$ if $0\le w\le1$.
My first thought was that the value of $W$ depends on the value of $V$, and so it makes sense to write $P(W\le w)$ as an integral of conditional probabilities:
$$P(W\le w)=\int_{-\infty}^{\infty}P(W\le w\mid V=v)\,f_V(v)\,dv,$$
where $f_V$ is the probability density function of $V$. In this case, since $V$ has a uniform distribution over $[0,1]$, we see that $f_V(v)=1$ for $0\le v\le1$. Therefore,
$$P(W\le w)=\int_{0}^{1}P(W\le w\mid V=v)\,dv.$$
My second thought was that really has a two-part definition:
So it made sense to divide the conditional probability into these two cases:
My third thought was that these probabilities can be rewritten using the Multiplication Rule. This ordinarily has the form $P(A\cap B)=P(A)\,P(B\mid A)$. For an initial conditional probability, it has the form $P(A\cap B\mid C)=P(A\mid C)\,P(B\mid A\cap C)$. Therefore,
.
The definition of provides the immediate computation of and :
Also, the two-part definition of provides the next step:
We split each of these integrals into an integral from to and then an integral from to . First,
.
We now use the following: if and is uniform over , then
We observe that in the first integral, while in the second integral. Therefore,
.
For the second integral involving , we again split into two subintegrals and use the fact that if is uniform on , then
Therefore,
.
Combining, we conclude that
$$P(W\le w)=w,$$
from which it follows that $W$ is uniformly distributed on $[0,1]$.
As I recall, this took a couple days of staring and false starts before I was finally able to get the solution.
The following problem appeared in Volume 53, Issue 4 (2022) of The College Mathematics Journal. This was the first problem that I was able to solve in over 30 years of subscribing to MAA journals.
Suppose that $U$ and $V$ are independent, uniform random variables over $[0,1]$. Now define the random variable $W$ by
$$W=U+V-\mathbf{1}[U+V\ge1].$$
Prove that $W$ is uniform over $[0,1]$. Here, $\mathbf{1}[A]$ is the indicator function that is equal to $1$ if $A$ is true and $0$ otherwise.
The first thing that went through my mind was something like, "This looks odd. But it's a probability problem using concepts from a senior-level undergraduate probability course. This was once my field of specialization. I had better be able to get this."
My second thought was that one way of proving that $W$ is uniform on $[0,1]$ is showing that $P(W\le w)=w$ if $0\le w\le1$.
My third thought was that $W$ really had a two-part definition:
$$W=\begin{cases}U+V&\text{if }U+V<1,\\U+V-1&\text{if }U+V\ge1.\end{cases}$$
So I got started by dividing this probability into the two cases:
$$P(W\le w)=P(U+V-1\le w,\;U+V\ge1)+P(U+V\le w,\;U+V<1)=P(1\le U+V\le1+w)+P(U+V\le w).$$
In the last step, since $w<1$, the events $[U+V\le w]$ and $[U+V<1]$ are redundant: if $U+V\le w$, then $U+V$ will automatically be less than $1$. Therefore, it's safe to remove $[U+V<1]$ from the last probability.
Ordinarily, such probabilities are computed by double integrals over the joint probability density function of $U$ and $V$, which usually isn't easy. However, in this case, since $U$ and $V$ are independent and uniform over $[0,1]$, the ordered pair $(U,V)$ is uniform on the unit square $[0,1]\times[0,1]$. Therefore, probabilities can be found by simply computing areas.
In this case, since the area of the unit square is $1$, $P(W\le w)$ is equal to the sum of the areas of
$$\{(u,v):1\le u+v\le1+w\},$$
which is depicted in green below, and
$$\{(u,v):u+v\le w\},$$
which is depicted in purple.
First, the area in green is a trapezoid. The intercept of the line $u+v=1+w$ is $1+w$, and the two lengths of $1-w$ and $1-w$ on the upper left of the square are found from this intercept. The area of the green trapezoid is easiest found by subtracting the areas of two isosceles right triangles:
$$\frac{1}{2}(1)(1)-\frac{1}{2}(1-w)(1-w)=\frac{1}{2}-\frac{(1-w)^2}{2}.$$
Second, the area in purple is an isosceles right triangle. The intercept of the line $u+v=w$ is $w$, so that the distance from the intercept to the origin is $w$. From this, the two leg lengths of $w$ and $w$ are found. Therefore, the area of the purple right triangle is $\frac{1}{2}\,w\cdot w=\frac{w^2}{2}$.
Adding, we conclude that
$$P(W\le w)=\frac{1}{2}-\frac{(1-w)^2}{2}+\frac{w^2}{2}=w.$$
Therefore, $W$ is uniform over $[0,1]$.
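A seeded Monte Carlo simulation supports the conclusion. This sketch assumes the mod-1 reading of the definition, $W=U+V-\mathbf{1}[U+V\ge1]$, and checks that the empirical CDF tracks $P(W\le w)=w$.

```python
import random

random.seed(12345)  # fixed seed for reproducibility

def sample_W():
    # W = U + V - 1[U + V >= 1]: the fractional part of U + V
    u, v = random.random(), random.random()
    w = u + v
    return w - 1 if w >= 1 else w

N = 100_000
samples = [sample_W() for _ in range(N)]
for w in [0.1, 0.25, 0.5, 0.75, 0.9]:
    empirical = sum(s <= w for s in samples) / N
    assert abs(empirical - w) < 0.01  # well within Monte Carlo error at this N
```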
A closing note: after going 0-for-4000 in my previous 30+ years of attempting problems submitted to MAA journals, I was unbelievably excited to finally get one. As I recall, it took me less than an hour to get the above solution, although writing up the solution cleanly took longer.
However, the above was only Part 1 of a two-part problem, so I knew I just had to get the second part before submitting. That’ll be the subject of the next post.
I first became a member of the Mathematical Association of America in 1988. My mentor in high school gave me a one-year membership as a high school graduation present, and I’ve maintained my membership ever since. Most years, I’ve been a subscriber to three journals: The American Mathematical Monthly, Mathematics Magazine, and College Mathematics Journal.
A feature for each of these journals is the Problems/Solutions section. In a nutshell, readers devise and submit original problems in mathematics for other readers to solve; the editors usually allow readers to submit solutions for a few months after the problems are first published. Between the three journals, something like 120 problems are submitted annually by readers.
And, historically, I had absolutely no success in solving these problems. Said another way: over my first 30+ years as an MAA member, I went something like 0-for-4000 at solving these submitted problems. This gnawed at me for years, especially when I read the solutions offered by other readers, maybe a year after the problem originally appeared, and thought to myself, “Why didn’t I think of that?”
Well, to be perfectly honest, that's still my usual experience. However, in the past couple of years, I actually managed to solve a handful of problems that appeared in Mathematics Magazine and The College Mathematics Journal, to my great surprise and delight. I don't know what happened. Maybe I've just gotten better at problem solving. Maybe solving the first one or two boosted my confidence. Maybe success breeds success. Maybe all the hard problems have already been printed, and the journals' editors have nothing left to publish except relatively easier problems.
In this short series, I’ll try to reconstruct my thought processes and flashes of inspiration that led to these solutions.