The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.
Evaluate the following sums in closed form:
and
.
By using the Taylor series expansions of and and flipping the order of a double sum, I was able to show that
.
I immediately got to thinking: there’s nothing particularly special about and for this analysis. Is there a way of generalizing this result to all functions with a Taylor series expansion?
Suppose
,
and let’s use the same technique to evaluate
.
To see why this matches our above results, let’s start with and write out the full Taylor series expansion, including zero coefficients:
,
so that
or
After dropping the zero terms and collecting, we obtain
.
A similar calculation would apply to any even function .
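The general identity itself is blanked above, but the double-sum swap determines it: if f(x) = Σ c_k x^k converges, then the term c_k x^k appears in exactly k of the tails f(x) − T_n(x) (namely for n = 0, …, k − 1), so Σ_{n≥0} (f(x) − T_n(x)) = Σ_k k c_k x^k = x f′(x). A quick numeric spot-check with f(x) = e^x:

```python
from math import exp, factorial

x = 0.5
N = 60  # truncation order for the Taylor series of e^x
c = [1 / factorial(k) for k in range(N)]  # Taylor coefficients of e^x

# Left side: sum over n of the tail  f(x) - T_n(x) = sum_{k > n} c_k x^k
tails = sum(sum(c[k] * x**k for k in range(n + 1, N)) for n in range(N))

# Right side: x f'(x), which is x e^x for f(x) = e^x
print(tails, x * exp(x))  # the two agree to roughly double precision
```

For an even function, where every other Taylor coefficient is zero, consecutive tails coincide in pairs, which is why a factor of 1/2 appears when the partial sums are indexed over only the even powers.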
In the previous two posts, I showed that
;
the technique was to use the Taylor series expansions of and to write and as double sums and then interchange the order of summation.
In this post, I share an alternate way of solving for and . I wish I could take credit for this, but I first learned the idea from my daughter. If we differentiate , we obtain
.
Something similar happens when differentiating the series for ; however, it’s not quite so simple because of the term. I begin by separating the term from the sum, so that a sum from to remains:
.
I then differentiate as before:
.
At this point, we reindex the sum. We make the replacement , so that and varies from to . After the replacement, we then change the dummy index from back to .
With a slight alteration to the term, this sum is exactly the definition of :
.
Summarizing, we have shown that and . Differentiating a second time, we obtain
or
.
This last equation is a second-order nonhomogeneous linear differential equation with constant coefficients. A particular solution, using the method of undetermined coefficients, must have the form . Substituting, we see that
Matching coefficients, we find and , which then lead to the particular solution
Since and are solutions of the associated homogeneous equation , we conclude that
,
where the values of and depend on the initial conditions on . As it turns out, it is straightforward to compute and , so we will choose for the initial conditions. We observe that and are both clearly equal to 0, so that as well.
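The differential equation itself is blanked above, but its structure is fully described: constant coefficients, a forcing term resonant with the homogeneous solutions cos x and sin x, and zero initial data at x = 0. As an illustrative stand-in (an assumption on my part, not the blanked equation), SymPy solves y″ + y = cos x under those initial conditions:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Illustrative second-order nonhomogeneous linear ODE with constant
# coefficients; cos(x) and sin(x) solve the associated homogeneous
# equation y'' + y = 0, so the forcing term cos(x) is resonant.
ode = sp.Eq(y(x).diff(x, 2) + y(x), sp.cos(x))

# Initial conditions chosen at x = 0, as in the text.
sol = sp.dsolve(ode, y(x), ics={y(0): 0, y(x).diff(x).subs(x, 0): 0})

# Both homogeneous constants vanish, leaving only the particular
# solution x*sin(x)/2 found by undetermined coefficients.
print(sp.simplify(sol.rhs))
```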
In the previous post, we showed that by writing the series as a double sum and then reversing the order of summation. We proceed with very similar logic to evaluate . Since
is the Taylor series expansion of , we may write as
As before, we employ one of my favorite techniques from the bag of tricks: reversing the order of summation. Also as before, the inner sum is independent of , and so the inner sum is simply equal to the summand times the number of terms. We see that
.
At this point, the solution for departs from the previous solution for . I want to cancel the factor of in the summand; however, the denominator is
,
and doesn’t cancel cleanly with . Hypothetically, I could cancel as follows:
,
but that introduces an extra in the denominator that I’d rather avoid.
So, instead, I’ll write as and then distribute and split into two different sums:
.
At this point, I factored out a power of from the first sum. In this way, the two sums are the Taylor series expansions of and :
.
This was sufficiently complicated that I was unable to guess this solution by experimenting with Mathematica; nevertheless, Mathematica can give graphical confirmation of the solution since the graphs of the two expressions overlap perfectly.
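The closed form is blanked above, but it can also be sanity-checked numerically. Assuming the convention tail = sin x − (partial sum), the sum of the tails of the sine series works out to (x cos x − sin x)/2, a combination of the two Taylor series named at the end of the argument; a short Python spot-check:

```python
from math import sin, cos, factorial

x = 2.0
N = 40  # number of tails to accumulate

def sin_partial(x, n):
    """Taylor partial sum of sin x through the x^(2n+1) term."""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(n + 1))

# Sum of the tail-sums of the sine series (tail = sin x - partial sum)
tails = sum(sin(x) - sin_partial(x, n) for n in range(N))

# Conjectured closed form under this sign convention
closed = (x * cos(x) - sin(x)) / 2
print(tails, closed)
```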
We start with and the Taylor series
.
With this, can be written as
.
At this point, my immediate thought was one of my favorite techniques from the bag of tricks: reversing the order of summation. (Two or three chapters of my Ph.D. thesis derived from knowing when to apply this technique.) We see that
.
At this point, the inner sum is independent of , and so the inner sum is simply equal to the summand times the number of terms. Since there are terms for the inner sum (), we see
.
To simplify, we multiply top and bottom by 2 so that the first term of cancels:
At this point, I factored out a and a power of to make the sum match the Taylor series for :
.
I was unsurprised but comforted that this matched the guess I had made by experimenting with Mathematica.
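In the same spirit as the Mathematica experiment, a short Python computation can verify both the key step (the swapped double sum, in which the coefficient of x^(2k) is counted k times) and the blanked closed form, here assumed to be −(x sin x)/2 under the convention tail = cos x − (partial sum):

```python
from math import sin, cos, factorial

x = 1.0
N = 30  # number of tails / series terms to keep

def cos_partial(x, n):
    """Taylor partial sum of cos x through the x^(2n) term."""
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k)
               for k in range(n + 1))

# Sum of the tail-sums of the cosine series
tails = sum(cos(x) - cos_partial(x, n) for n in range(N))

# After reversing the order of summation, the k-th coefficient is
# counted exactly k times.
swapped = sum(k * (-1) ** k * x ** (2 * k) / factorial(2 * k)
              for k in range(1, N + 1))

# Conjectured closed form under this sign convention
closed = -x * sin(x) / 2
print(tails, swapped, closed)
```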
When I first read this problem, I immediately noticed that
is a Taylor polynomial of and
is a Taylor polynomial of . In other words, the given expressions are the sums of the tail-sums of the Taylor series for and .
As usual when stumped, I used technology to guide me. Here’s the graph of the first sum, adding the first 50 terms.
I immediately notice that the function oscillates, which makes me suspect that the answer involves either or . I also notice that the sizes of the oscillations increase as increases, so that the answer should have the form or , where is an increasing function. Next, the graph is symmetric about the vertical axis, so that the function is even. Finally, the graph passes through the origin.
So, taking all of that in, one of my first guesses was , which satisfies all of the above criteria.
That’s not it, but it’s not far off. The oscillations of my guess in orange are too big and they’re inverted from the actual graph in blue. After some guessing, I eventually landed on .
That was a very good sign… the two graphs were pretty much on top of each other. That’s not a proof that is the answer, of course, but it’s certainly a good indicator.
I didn’t have the same luck with the other sum; I could graph it but wasn’t able to just guess what the curve could be.
At long last, we have reached the end of this series of posts.
The derivation is elementary; I’m confident that I could have understood this derivation had I seen it when I was in high school. That said, the word “elementary” in mathematics can be a bit loaded: it means only that an argument is built from simple ideas, though those ideas may be used in a profound and surprising way. Perhaps my favorite quote along these lines was this understated gem from the book Three Pearls of Number Theory after the conclusion of a very complicated proof in Chapter 1:
You see how complicated an entirely elementary construction can sometimes be. And yet this is not an extreme case; in the next chapter you will encounter just as elementary a construction which is considerably more complicated.
Here are the elementary ideas from calculus, precalculus, and high school physics that were used in this series:
Physics
Conservation of angular momentum
Newton’s Second Law
Newton’s Law of Gravitation
Precalculus
Completing the square
Quadratic formula
Factoring polynomials
Complex roots of polynomials
Bounds on and
Period of and
Zeroes of and
Trigonometric identities (Pythagorean, sum and difference, double-angle)
Conic sections
Graphing in polar coordinates
Two-dimensional vectors
Dot products of two-dimensional vectors (especially perpendicular vectors)
Euler’s equation
Calculus
The Chain Rule
Derivatives of and
Linearizations of , , and near (or, more generally, their Taylor series approximations)
Derivative of
Solving initial-value problems
Integration by substitution
While these ideas from calculus are elementary, they were certainly used in clever and unusual ways throughout the derivation.
I should add that although the derivation was elementary, certain parts of the derivation could be made easier by appealing to standard concepts from differential equations.
One more thought. While this series of posts was inspired by a calculation that appeared in an undergraduate physics textbook, I had thought that this series might be worthy of publication in a mathematical journal as an historical example of an important problem that can be solved by elementary tools. Unfortunately for me, Hieu D. Nguyen’s terrific article Rearing Its Ugly Head: The Cosmological Constant and Newton’s Greatest Blunder in The American Mathematical Monthly is already in the record.
In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.
In this series, we found an approximate solution to the governing initial-value problem
,
where , , , is the gravitational constant of the universe, is the mass of the planet, is the mass of the Sun, is the constant angular momentum of the planet, is the eccentricity of the orbit, and is the speed of light.
We used the following steps to find an approximate solution.
Step 0. Ignore the general-relativity contribution and solve the simpler initial-value problem
,
which is a zeroth-order approximation to the real initial-value problem. We found that the solution of this differential equation is
,
which is the equation of an ellipse in polar coordinates.
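The initial-value problem itself is blanked above; assuming it takes the standard Binet form u″ + u = GMm²/L² (with u = 1/r), which matches the constants listed earlier, SymPy confirms that the general solution is a constant plus a sinusoid, i.e., the polar equation of a conic:

```python
import sympy as sp

theta = sp.symbols("theta")
K = sp.symbols("K", positive=True)  # stands in for G*M*m**2/L**2
u = sp.Function("u")

# Zeroth-order orbit equation in (assumed) Binet form: u'' + u = K
sol = sp.dsolve(sp.Eq(u(theta).diff(theta, 2) + u(theta), K), u(theta))

# General solution: C1*sin(theta) + C2*cos(theta) + K
print(sol)
```

Placing the perihelion at θ = 0 forces the sine term to vanish, leaving u = K(1 + e cos θ) with e = C2/K, the ellipse quoted above.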
Step 1. Solve the initial-value problem
,
which partially incorporates the term due to general relativity. This is a first-order approximation to the real differential equation. After much effort, we found that the solution of this initial-value problem is
.
For large values of , this is accurately approximated as:
,
which can be further approximated as
.
From this expression, the precession in a planet’s orbit due to general relativity can be calculated.
Roughly 20 years ago, I presented this application of differential equations at the annual meeting of the Texas Section of the Mathematical Association of America. After the talk, a member of the audience asked what would happen if we did this procedure yet again to find a second-order approximation. In other words, I was asked to consider…
Step 2. Solve the initial-value problem
.
It stands to reason that the answer should be an even more accurate approximation to the true solution .
I didn’t have an immediate answer for this question, but I can answer it now. Letting Mathematica do the work, here’s the answer:
Yes, it’s a mess. The term in red is , while the term in yellow is the next largest term in . Both of these appear in the answer to .
The term in green is the next largest term in , with the highest power of in the numerator and the highest power of in the denominator. In other words,
.
How does this compare to our previous approximation of
?
Well, to a second-order Taylor approximation, it’s the same! Let
.
Expanding about and treating as a constant, we find
.
Substituting yields the above approximation for .
Said another way, proceeding to a second-order approximation merely provides additional confirmation for the precession of a planet’s orbit.
Just for the fun of it, I also used Mathematica to find the solution of Step 3:
Step 3. Solve the initial-value problem
.
I won’t copy-and-paste the solution from Mathematica; it’s really long. I will say that, unsurprisingly, the leading terms are
.
I said “unsurprisingly” because this matches the third-order Taylor polynomial of our precession expression. I don’t have time to attempt it, but surely there’s a theorem to be proven here based on this computational evidence.
We have shown that under general relativity, the motion of a planet around the Sun precesses by
,
where is the semi-major axis of the planet’s orbit, is the orbit’s eccentricity, is the gravitational constant of the universe, is the mass of the Sun, and is the speed of light.
Notice that for to be as observable as possible, we’d like to be as small as possible and to be as large as possible. By a fortunate coincidence, Mercury — the closest planet to the Sun — also has the most elliptical orbit of the eight planets.
Here are the values of the constants for Mercury’s orbit in the SI system:
The last constant, , is the time for Mercury to complete one orbit. This isn’t in the SI system, but using Earth years as the unit of time will prove useful later in this calculation.
Using these numbers, and recalling that , we find that
.
Notice that all of the units cancel out perfectly; this bit of dimensional analysis is a useful check against careless mistakes.
Again, the units of are radians per Mercury orbit, or radians per 0.2408 years. We now convert this to arc seconds per century:
.
This indeed matches the observed precession in Mercury’s orbit, thus confirming Einstein’s theory of relativity.
This same computation can be made for other planets. For Venus, we have the new values of , , and . Repeating this calculation, we predict the precession in Venus’s orbit to be 8.65” per century. Einstein made this prediction in 1915, when the telescopes of the time were not good enough to measure the precession in Venus’s orbit. That measurement finally happened in 1960, 45 years later and 5 years after Einstein died. Not surprisingly, the precession in Venus’s orbit also agrees with general relativity.
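Both figures can be reproduced with a short computation. Here is a sketch in Python, assuming standard reference values for the constants (the table of constants above is not reproduced here) and the standard general-relativistic precession formula 6πGM/(c²a(1 − e²)) radians per orbit, consistent with the constants listed earlier in this series:

```python
from math import pi

# Standard values (assumed from common references), SI units
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
c = 2.998e8     # speed of light, m/s
ARCSEC_PER_RAD = 180 * 3600 / pi

def precession_arcsec_per_century(a, e, T_years):
    """6*pi*G*M / (c^2 * a * (1 - e^2)) radians per orbit,
    converted to arc seconds per century."""
    delta = 6 * pi * G * M / (c**2 * a * (1 - e**2))
    return delta * (100 / T_years) * ARCSEC_PER_RAD

print(precession_arcsec_per_century(5.791e10, 0.2056, 0.2408))   # Mercury: ~43"
print(precession_arcsec_per_century(1.0821e11, 0.0068, 0.6152))  # Venus:   ~8.6"
```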
We have shown that the motion of a planet around the Sun, expressed in polar coordinates with the Sun at the origin, under general relativity is
,
where , , is the semi-major axis of the planet’s orbit, is the orbit’s eccentricity, , is the gravitational constant of the universe, is the mass of the planet, is the mass of the Sun, is the planet’s perihelion, is the constant angular momentum of the planet, and is the speed of light.
The above function is maximized (i.e., the distance from the Sun is minimized) when is as large as possible. This occurs when is a multiple of .
Said another way, the planet is at its closest point to the Sun when . One orbit later, the planet returns to its closest point to the Sun when
We now use the approximation
;
this can be demonstrated by linearization, Taylor series, or using the first two terms of the geometric series . With this approximation, the closest approach to the Sun in the next orbit occurs when
,
which is coterminal with the angle
.
Substituting and , we see that the amount of precession per orbit is
.
The units of are radians per orbit. In the next post, we will use Mercury’s data to find in seconds of arc per century.
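The approximation invoked above, presumably of the form 1/(1 − ε) ≈ 1 + ε for small ε (an assumption, since the expression is blanked), can also be confirmed symbolically with SymPy:

```python
import sympy as sp

eps = sp.symbols("epsilon")

# First two terms of the geometric series for 1/(1 - eps):
# 1 + epsilon + O(epsilon**2)
print(sp.series(1 / (1 - eps), eps, 0, 2))
```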
We have shown that the motion of a planet around the Sun, expressed in polar coordinates with the Sun at the origin, under general relativity is
,
where , , , , is the gravitational constant of the universe, is the mass of the planet, is the mass of the Sun, is the planet’s perihelion, is the constant angular momentum of the planet, and is the speed of light.
We notice that the orbit of a planet under general relativity looks very, very similar to the orbit under Newtonian physics:
,
so that
.
As we’ve seen, this describes an elliptical orbit, normally expressed in rectangular coordinates as
,
with semi-major axis along the axis. In particular, for an elliptical orbit, the planet’s closest approach to the Sun occurs at :
,
and the planet’s furthest distance from the Sun occurs at :
.
Therefore, the length of the major axis of the ellipse is the sum of these two distances:
.
Said another way, . This is a far more convenient formula for computing than , as the values of (the semi-major axis) and (the eccentricity of the orbit) are more accessible than the angular momentum of the planet’s orbit.
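The relation in the last paragraph can be verified symbolically. Here is a sketch assuming the standard polar form of a conic with a focus at the origin, r = p/(1 + e cos θ), with p standing in for the blanked combination of constants:

```python
import sympy as sp

p, e, theta = sp.symbols("p e theta", positive=True)

# Polar equation of a conic with focus at the origin (assumed form)
r = p / (1 + e * sp.cos(theta))

r_peri = r.subs(theta, 0)       # closest approach, theta = 0
r_far = r.subs(theta, sp.pi)    # furthest distance, theta = pi

# Semi-major axis = half the major axis; simplifies to p/(1 - e**2),
# i.e. p = a*(1 - e**2)
a = sp.simplify((r_peri + r_far) / 2)
print(a)
```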
In the next post, we finally compute the precession of the orbit.