Solving Problems Submitted to MAA Journals (Part 6e)

The following problem appeared in Volume 97, Issue 3 (2024) of Mathematics Magazine.

Two points P and Q are chosen at random (uniformly) from the interior of a unit circle. What is the probability that the circle whose diameter is segment \overline{PQ} lies entirely in the interior of the unit circle?

Let D_r be the interior of the circle centered at the origin O with radius r. Also, let C(P,Q) denote the circle with diameter \overline{PQ}, and let R = OP be the distance of P from the origin.

In the previous post, we showed that

\hbox{Pr}(C(P,Q) \subset D_1 \mid R = r) = \sqrt{1-r^2}.

To find \hbox{Pr}(C(P,Q) \subset D_1), I will integrate over this conditional probability:

\hbox{Pr}(C(P,Q) \subset D_1) = \displaystyle \int_0^1 \hbox{Pr}(C(P,Q) \subset D_1 \mid R = r) F'(r) \, dr,

where F(r) is the cumulative distribution function of R. For 0 \le r \le 1,

F(r) = \hbox{Pr}(R \le r) = \hbox{Pr}(P \in D_r) = \displaystyle \frac{\hbox{area}(D_r)}{\hbox{area}(D_1)} = \frac{\pi r^2}{\pi} = r^2.

Therefore,

\hbox{Pr}(C(P,Q) \subset D_1) = \displaystyle \int_0^1 \hbox{Pr}(C(P,Q) \subset D_1 \mid R = r) F'(r) \, dr

= \displaystyle \int_0^1 2 r \sqrt{1-r^2} \, dr.

To calculate this integral, I’ll use the substitution u = 1-r^2, so that du = -2r \, dr. The endpoints r=0 and r=1 become u = 1-0^2 = 1 and u = 1-1^2 = 0. Therefore,

\hbox{Pr}(C(P,Q) \subset D_1) = \displaystyle \int_0^1 2 r \sqrt{1-r^2} \, dr

= \displaystyle \int_1^0 -\sqrt{u} \, du

= \displaystyle \int_0^1 \sqrt{u} \, du

= \displaystyle \frac{2}{3} \left[  u^{3/2} \right]_0^1

=\displaystyle  \frac{2}{3}\left[ (1)^{3/2} - (0)^{3/2} \right]

= \displaystyle \frac{2}{3},

confirming the answer I had guessed from simulations.
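For the record, here is a sketch (in Python, added for illustration; not necessarily the code I originally ran) of the kind of simulation I mean. It uses the fact that the circle with diameter \overline{PQ} is centered at the midpoint M of \overline{PQ} with radius \frac{1}{2}|PQ|, and that this circle lies inside the unit circle exactly when |OM| + \frac{1}{2}|PQ| \le 1.

```python
import math
import random

def random_point_in_unit_disk(rng):
    """Sample uniformly from the unit disk by rejection sampling."""
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y < 1:
            return x, y

def estimate_probability(trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        px, py = random_point_in_unit_disk(rng)
        qx, qy = random_point_in_unit_disk(rng)
        # Circle on diameter PQ: center is the midpoint M, radius is |PQ|/2.
        mx, my = (px + qx) / 2, (py + qy) / 2
        radius = math.dist((px, py), (qx, qy)) / 2
        # It lies inside the unit circle exactly when |OM| + |PQ|/2 <= 1.
        if math.hypot(mx, my) + radius <= 1:
            hits += 1
    return hits / trials

print(estimate_probability())  # should hover near 2/3
```

With a couple hundred thousand trials, the estimate typically lands within a few thousandths of 2/3.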

Solving Problems Submitted to MAA Journals (Part 5d)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

In the previous two posts, I showed that

f(x) = - \displaystyle \frac{x \sin x}{2} \qquad \hbox{and} \qquad g(x) = \displaystyle \frac{x \cos x - \sin x}{2};

the technique that I used was using the Taylor series expansions of \sin x and \cos x to write f(x) and g(x) as double sums and then interchanging the order of summation.

In this post, I share an alternate way of solving for f(x) and g(x). I wish I could take credit for this, but I first learned the idea from my daughter. If we differentiate g(x), we obtain

g'(x) = \displaystyle \sum_{n=0}^\infty \left( [\sin x]' - [x]' + \left[\frac{x^3}{3!}\right]' - \left[\frac{x^5}{5!}\right]' \dots + \left[(-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!}\right]' \right)

= \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{3x^2}{3!} - \frac{5x^4}{5!} \dots + (-1)^{n-1} \frac{(2n+1)x^{2n}}{(2n+1)!} \right)

= \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{3x^2}{3 \cdot 2!} - \frac{5x^4}{5 \cdot 4!} \dots + (-1)^{n-1} \frac{(2n+1)x^{2n}}{(2n+1)(2n)!} \right)

= \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

= f(x).

Something similar happens when differentiating the series for f(x); however, it’s not quite so simple because of the -1 term. I begin by separating the n=0 term from the sum, so that a sum from n =1 to \infty remains:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

= (\cos x - 1) + \displaystyle \sum_{n=1}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right).

I then differentiate as before:

f'(x) = (\cos x - 1)' + \displaystyle \sum_{n=1}^\infty \left( [\cos x - 1]' + \left[ \frac{x^2}{2!} \right]' - \left[ \frac{x^4}{4!} \right]' \dots + \left[ (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right]' \right)

= -\sin x + \displaystyle \sum_{n=1}^\infty \left( -\sin x + \frac{2x}{2!}  - \frac{4x^3}{4!} \dots + (-1)^{n-1} \frac{(2n) x^{2n-1}}{(2n)!} \right)

= -\sin x + \displaystyle \sum_{n=1}^\infty \left( -\sin x + \frac{2x}{2 \cdot 1!}  - \frac{4x^3}{4 \cdot 3!} \dots + (-1)^{n-1} \frac{(2n) x^{2n-1}}{(2n)(2n-1)!} \right)

= -\sin x + \displaystyle \sum_{n=1}^\infty \left( -\sin x + x - \frac{x^3}{3!} + \dots + (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} \right)

= -\sin x - \displaystyle \sum_{n=1}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} \right).

At this point, we reindex the sum. We make the replacement k = n - 1, so that n = k+1 and k varies from k=0 to \infty. After the replacement, we then change the dummy index from k back to n.

f'(x) = -\sin x - \displaystyle \sum_{k=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{(k+1)-1} \frac{x^{2(k+1)-1}}{(2(k+1)-1)!} \right)

= -\sin x -  \displaystyle \sum_{k=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{k} \frac{x^{2k+1}}{(2k+1)!} \right)

= -\sin x -  \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{n} \frac{x^{2n+1}}{(2n+1)!} \right).

With a slight alteration to the (-1)^n term, this sum is exactly the definition of g(x):

f'(x)= -\sin x -  \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^1 (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right)

= -\sin x -  \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right)

= -\sin x - g(x).

Summarizing, we have shown that g'(x) = f(x) and f'(x) = -\sin x - g(x). Differentiating f'(x) a second time, we obtain

f''(x) = -\cos x - g'(x) = -\cos x - f(x)

or

f''(x) + f(x) = -\cos x.

This last equation is a second-order nonhomogeneous linear differential equation with constant coefficients. A particular solution, using the method of undetermined coefficients, must have the form F(x) = Ax\cos x + Bx \sin x. Substituting, we see that

[Ax \cos x + B x \sin x]'' + A x \cos x + Bx \sin x = -\cos x

-2A \sin x - Ax \cos x + 2B \cos x - B x \sin x + Ax \cos x + B x \sin x = -\cos x

-2A \sin x  + 2B \cos x = -\cos x

We see that A = 0 and B = -1/2, which then lead to the particular solution

F(x) = -\displaystyle \frac{1}{2} x \sin x

Since \cos x and \sin x are solutions of the associated homogeneous equation f''(x) + f(x) = 0, we conclude that

f(x) = c_1 \cos x + c_2 \sin x - \displaystyle \frac{1}{2} x \sin x,

where the values of c_1 and c_2 depend on the initial conditions on f. As it turns out, it is straightforward to compute f(0) and f'(0), so we will choose x=0 for the initial conditions. We observe that f(0) and g(0) are both clearly equal to 0, so that f'(0) = -\sin 0 - g(0) = 0 as well.

The initial condition f(0)=0 clearly implies that c_1 = 0:

f(0) = c_1 \cos 0 + c_2 \sin 0 - \displaystyle \frac{1}{2} \cdot 0 \sin 0

0 = c_1

To find c_2, we first find f'(x):

f'(x) = c_2 \cos x - \displaystyle \frac{1}{2} \sin x - \frac{1}{2} x \cos x

f'(0) = c_2 \cos 0 - \displaystyle  \frac{1}{2} \sin 0 - \frac{1}{2} \cdot 0 \cos 0

0 = c_2.

Since c_1 = c_2 = 0, we conclude that f(x) = - \displaystyle \frac{1}{2} x \sin x, and so

g(x) = -\sin x - f'(x)

= -\sin x - \displaystyle  \left( -\frac{1}{2} \sin x - \frac{1}{2} x \cos x \right)

= \displaystyle \frac{x \cos x - \sin x}{2}.
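As a quick numerical sanity check (an illustration added here, not part of the original solution), the two closed forms can be tested against the identities g' = f, f' = -\sin x - g, and f'' + f = -\cos x, with central finite differences standing in for the exact derivatives:

```python
import math

def f(x):
    """Closed form for f found above."""
    return -0.5 * x * math.sin(x)

def g(x):
    """Closed form for g found above."""
    return (x * math.cos(x) - math.sin(x)) / 2

h = 1e-4  # step size for central finite differences
for x in (-1.1, 0.0, 0.5, 1.3, 2.7):
    f1 = (f(x + h) - f(x - h)) / (2 * h)            # approximates f'(x)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2  # approximates f''(x)
    g1 = (g(x + h) - g(x - h)) / (2 * h)            # approximates g'(x)
    assert abs(g1 - f(x)) < 1e-6                # g' = f
    assert abs(f1 + math.sin(x) + g(x)) < 1e-6  # f' = -sin x - g
    assert abs(f2 + f(x) + math.cos(x)) < 1e-5  # f'' + f = -cos x
print("all three identities check out")
```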

Solving Problems Submitted to MAA Journals (Part 5c)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

In the previous post, we showed that f(x) = - \frac{1}{2} x \sin x by writing the series as a double sum and then reversing the order of summation. We proceed with very similar logic to evaluate g(x). Since

\sin x = \displaystyle \sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

is the Taylor series expansion of \sin x, we may write g(x) as

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!} - \sum_{k=0}^n (-1)^k \frac{x^{2k+1}}{(2k+1)!} \right)

= \displaystyle \sum_{n=0}^\infty \sum_{k=n+1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

As before, we employ one of my favorite techniques from the bag of tricks: reversing the order of summation. Also as before, the inner sum is independent of n, and so the inner sum is simply equal to the summand times the number of terms. We see that

g(x) = \displaystyle \sum_{k=1}^\infty \sum_{n=0}^{k-1} (-1)^k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \sum_{k=1}^\infty (-1)^k \cdot k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \cdot 2k \frac{x^{2k+1}}{(2k+1)!}.

At this point, the solution for g(x) diverges from the previous solution for f(x). I want to cancel the factor of 2k in the summand; however, the denominator is

(2k+1)! = (2k+1)(2k)!,

and 2k doesn’t cancel cleanly with (2k+1). Hypothetically, I could cancel as follows:

\displaystyle \frac{2k}{(2k+1)!} = \frac{2k}{(2k+1)(2k)(2k-1)!} = \frac{1}{(2k+1)(2k-1)!},

but that introduces an extra (2k+1) in the denominator that I’d rather avoid.

So, instead, I’ll write 2k as (2k+1)-1 and then distribute and split into two different sums:

g(x) = \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \cdot 2k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k (2k+1-1) \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty \left[ (-1)^k (2k+1) \frac{x^{2k+1}}{(2k+1)!} - (-1)^k \cdot 1 \frac{x^{2k+1}}{(2k+1)!} \right]

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k (2k+1) \frac{x^{2k+1}}{(2k+1)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k  \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k (2k+1) \frac{x^{2k+1}}{(2k+1)(2k)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}.

At this point, I factor out a power of x from the first sum. The two sums are then the Taylor series expansions of \cos x and \sin x with their k = 0 terms (namely 1 and x) missing:

g(x) = \displaystyle \frac{x}{2} \sum_{k=1}^\infty (-1)^k \cdot \frac{x^{2k}}{(2k)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{x}{2} (\cos x - 1) - \frac{1}{2} (\sin x - x)

= \displaystyle \frac{x \cos x}{2} - \frac{x}{2} - \frac{\sin x}{2} + \frac{x}{2}

= \displaystyle \frac{x \cos x - \sin x}{2}.

This was sufficiently complicated that I was unable to guess this solution by experimenting with Mathematica; nevertheless, Mathematica can give graphical confirmation of the solution since the graphs of the two expressions overlap perfectly.
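A non-graphical version of that confirmation can be sketched in Python (an illustrative check I am adding here, not the Mathematica code): sum the first several summands of the definition of g directly and compare with the closed form.

```python
import math

def g_series(x, terms=40):
    """Sum the first `terms` summands of the definition of g."""
    total = 0.0
    for n in range(terms):
        # The n-th summand is sin x minus its Taylor polynomial of degree 2n+1.
        taylor = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
                     for k in range(n + 1))
        total += math.sin(x) - taylor
    return total

def g_closed(x):
    return (x * math.cos(x) - math.sin(x)) / 2

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(g_series(x) - g_closed(x)) < 1e-10
print("series and closed form agree")
```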

Solving Problems Submitted to MAA Journals (Part 5b)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) =  \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

We start with f(x) and the Taylor series

\cos x = \displaystyle \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!}.

With this, f(x) can be written as

f(x) = \displaystyle \sum_{n=0}^\infty \left( \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!} - \sum_{k=0}^n (-1)^k \frac{x^{2k}}{(2k)!} \right)

= \displaystyle \sum_{n=0}^\infty \sum_{k=n+1}^\infty (-1)^k \frac{x^{2k}}{(2k)!}.

At this point, my immediate thought was one of my favorite techniques from the bag of tricks: reversing the order of summation. (Two or three chapters of my Ph.D. thesis derived from knowing when to apply this technique.) We see that

f(x) = \displaystyle \sum_{k=1}^\infty \sum_{n=0}^{k-1} (-1)^k \frac{x^{2k}}{(2k)!}.

At this point, the inner sum is independent of n, and so the inner sum is simply equal to the summand times the number of terms. Since there are k terms for the inner sum (n = 0, 1, \dots, k-1), we see

f(x) =  \displaystyle \sum_{k=1}^\infty (-1)^k \cdot k \frac{x^{2k}}{(2k)!}.

To simplify, we multiply top and bottom by 2 so that the first factor of (2k)! cancels:

f(x) = \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \cdot 2k \frac{x^{2k}}{(2k)(2k-1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k}}{(2k-1)!}

At this point, I factored out a (-1) and a power of x to make the sum match the Taylor series for \sin x:

f(x) = \displaystyle -\frac{x}{2} \sum_{k=1}^\infty (-1)^{k-1} \frac{x^{2k-1}}{(2k-1)!} = -\frac{x \sin x}{2}.

I was unsurprised but comforted that this matched the guess I had made by experimenting with Mathematica.
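Both steps, the interchange of the order of summation and the final closed form, can be spot-checked numerically. In the Python sketch below (an illustration I am adding, with arbitrary made-up coefficients a_k), the finite analogue of the interchange is verified exactly, and the single sum for f(x) is compared with -\frac{1}{2} x \sin x.

```python
import math
import random

# Finite analogue of interchanging the order of summation: with arbitrary
# numbers a_1, ..., a_M, the tail-sum identity
#   sum_{n=0}^{M-1} sum_{k=n+1}^{M} a_k  =  sum_{k=1}^{M} k * a_k
# holds because each a_k appears in exactly k of the inner sums.
rng = random.Random(1)
M = 30
a = [0.0] + [rng.uniform(-1, 1) for _ in range(M)]  # a[0] is a placeholder
lhs = sum(sum(a[k] for k in range(n + 1, M + 1)) for n in range(M))
rhs = sum(k * a[k] for k in range(1, M + 1))
assert abs(lhs - rhs) < 1e-12

# The single sum obtained for f, compared with the closed form -x sin(x)/2.
def f_series(x, terms=40):
    return sum((-1) ** k * k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(1, terms))

for x in (-2.0, 0.0, 1.0, 3.0):
    assert abs(f_series(x) + x * math.sin(x) / 2) < 1e-10
print("interchange identity and closed form agree")
```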

Solving Problems Submitted to MAA Journals (Part 5a)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

\displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

\displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

When I first read this problem, I immediately noticed that

\displaystyle 1 - \frac{x^2}{2!} + \frac{x^4}{4!} \dots - (-1)^{n-1} \frac{x^{2n}}{(2n)!}

is a Taylor polynomial of \cos x and

\displaystyle x - \frac{x^3}{3!} + \frac{x^5}{5!} \dots - (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!}

is a Taylor polynomial of \sin x. In other words, the given expressions are the sums of the tail-sums of the Taylor series for \cos x and \sin x.

As usual when stumped, I used technology to guide me. Here’s the graph of the first sum, adding the first 50 terms.

I immediately notice that the function oscillates, which makes me suspect that the answer involves either \cos x or \sin x. The sizes of the oscillations increase as |x| increases, so the answer should have the form g(x) \cos x or g(x) \sin x, where |g| is increasing. I also notice that the graph is symmetric about the y-axis, so that the function is even, and that the graph passes through the origin.

So, taking all of that in, one of my first guesses was y = x \sin x, which satisfies all of the above criteria.

That’s not it, but it’s not far off. The oscillations of my guess in orange are too big and they’re inverted from the actual graph in blue. After some guessing, I eventually landed on y = -\frac{1}{2} x \sin x.

That was a very good sign… the two graphs were pretty much on top of each other. That’s not a proof that -\frac{1}{2} x \sin x is the answer, of course, but it’s certainly a good indicator.

I didn’t have the same luck with the other sum; I could graph it but wasn’t able to just guess what the curve could be.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 9: Pedagogical Thoughts

At long last, we have reached the end of this series of posts.

The derivation is elementary; I’m confident that I could have understood this derivation had I seen it when I was in high school. That said, the word “elementary” in mathematics can be a bit loaded — this means that it is based on simple ideas that are perhaps used in a profound and surprising way. Perhaps my favorite quote along these lines was this understated gem from the book Three Pearls of Number Theory after the conclusion of a very complicated proof in Chapter 1:

You see how complicated an entirely elementary construction can sometimes be. And yet this is not an extreme case; in the next chapter you will encounter just as elementary a construction which is considerably more complicated.

Here are the elementary ideas from calculus, precalculus, and high school physics that were used in this series:

  • Physics
    • Conservation of angular momentum
    • Newton’s Second Law
    • Newton’s Law of Gravitation
  • Precalculus
    • Completing the square
    • Quadratic formula
    • Factoring polynomials
    • Complex roots of polynomials
    • Bounds on \cos \theta and \sin \theta
    • Period of \cos \theta and \sin \theta
    • Zeroes of \cos \theta and \sin \theta
    • Trigonometric identities (Pythagorean, sum and difference, double-angle)
    • Conic sections
    • Graphing in polar coordinates
    • Two-dimensional vectors
    • Dot products of two-dimensional vectors (especially perpendicular vectors)
    • Euler’s equation
  • Calculus
    • The Chain Rule
    • Derivatives of \cos \theta and \sin \theta
    • Linearizations of \cos x, \sin x, and 1/(1-x) near x \approx 0 (or, more generally, their Taylor series approximations)
    • Derivative of e^x
    • Solving initial-value problems
    • Integration by u-substitution

While these ideas from calculus are elementary, they were certainly used in clever and unusual ways throughout the derivation.

I should add that although the derivation was elementary, certain parts of the derivation could be made easier by appealing to standard concepts from differential equations.

One more thought. While this series of posts was inspired by a calculation that appeared in an undergraduate physics textbook, I had thought that this series might be worthy of publication in a mathematical journal as an historical example of an important problem that can be solved by elementary tools. Unfortunately for me, Hieu D. Nguyen’s terrific article Rearing Its Ugly Head: The Cosmological Constant and Newton’s Greatest Blunder in The American Mathematical Monthly is already in the record.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 7a: Predicting Precession I

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity is

u(\theta) =  \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{ \delta\epsilon}{\alpha^2} \theta \sin \theta - \frac{ \delta \epsilon^2}{6\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, and c is the speed of light.

We notice that the first term of the above solution,

\displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} ,

is the same as the solution found earlier under Newtonian physics, without general relativity. Therefore, the remaining terms describe the perturbation due to general relativity. All of these terms contain the small factor \delta, and so these can be expected to be small adjustments to an elliptical orbit.

Of these terms, the terms

\displaystyle \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}

are constants, while the terms

- \displaystyle \frac{ \delta \epsilon^2}{6\alpha^2} \cos 2\theta -  \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta

are bounded since -1 \le \cos \theta \le 1 and -1 \le \cos 2\theta \le 1. By contrast, the term

\displaystyle \frac{ \delta\epsilon}{\alpha^2} \theta \sin \theta

grows without bound. Therefore, for large values of \theta, the planet’s orbit may be accurately described by only including this last perturbation:

u(\theta) =  \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{ \delta\epsilon}{\alpha^2} \theta \sin \theta.
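This bounded-versus-growing behavior can be illustrated numerically. In the Python sketch below, the parameter values \alpha = 1, \epsilon = 0.2, and \delta = 10^{-3} are hypothetical choices for illustration only (for Mercury, \delta is vastly smaller): the dropped terms never exceed a fixed bound, while the \theta \sin \theta term eventually does.

```python
import math

# Hypothetical parameter values chosen only for illustration.
alpha, eps, delta = 1.0, 0.2, 1e-3

def bounded_terms(theta):
    """The constant and bounded perturbation terms that are dropped."""
    return (delta / alpha**2
            + delta * eps**2 / (2 * alpha**2)
            - delta * eps**2 / (6 * alpha**2) * math.cos(2 * theta)
            - delta * (3 + eps**2) / (3 * alpha**2) * math.cos(theta))

def secular_term(theta):
    """The theta*sin(theta) term that grows without bound."""
    return delta * eps / alpha**2 * theta * math.sin(theta)

# Triangle-inequality bound on the dropped terms, valid for every theta.
bound = delta / alpha**2 * (1 + eps**2 / 2 + eps**2 / 6 + (3 + eps**2) / 3)
assert all(abs(bounded_terms(0.01 * n)) <= bound + 1e-15 for n in range(100_000))

# By contrast, the secular term eventually exceeds any fixed bound.
theta = 10_000 * 2 * math.pi + math.pi / 2  # sin(theta) is essentially 1 here
assert abs(secular_term(theta)) > bound
print("dropped terms stay bounded; the secular term does not")
```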

In the next post, we simplify this even further.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6i: Rationale for Method of Undetermined Coefficients VI

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

In the last few posts, I’ve used a standard technique from differential equations: to solve the nth order homogeneous differential equation with constant coefficients

a_n y^{(n)} + \dots + a_3 y''' + a_2 y'' + a_1 y' + a_0 y = 0,

we first solve the characteristic equation

a_n r^n + \dots + a_3 r^3 + a_2 r^2 + a_1 r + a_0 = 0

using techniques from Precalculus. The form of the roots r determines the solutions of the differential equation.

While this is a standard technique from differential equations, the perspective I’m taking in this series is scaffolding the techniques used to predict the precession in a planet’s orbit using only techniques from Calculus and Precalculus. So let me discuss why the above technique works, assuming that the characteristic equation does not have repeated roots. (The repeated roots case is a little more complicated but is not needed for the present series of posts.)

We begin by guessing that the above differential equation has a solution of the form y = e^{rt}. Differentiating, we find y' = re^{rt}, y'' = r^2 e^{rt}, etc. Therefore, the differential equation becomes

a_n r^n e^{rt} + \dots + a_3 r^3 e^{rt} + a_2 r^2 e^{rt} + a_1 r e^{rt} + a_0 e^{rt} = 0

e^{rt} \left(a_n r^n  + \dots + a_3 r^3 + a_2 r^2 + a_1 r  + a_0 \right) = 0

a_n r^n  + \dots + a_3 r^3 + a_2 r^2 + a_1 r  + a_0 = 0

The last step does not “lose” any possible solutions for r since e^{rt} can never be equal to 0. Therefore, solving the differential equation reduces to finding the roots of this polynomial, which can be done using standard techniques from Precalculus.

For example, one of the differential equations that we’ve encountered is y''+y=0. The characteristic equation is r^2+1=0, which has roots r=\pm i. Therefore, two solutions to the differential equation are e^{it} and e^{-it}, so that the general solution is

y = c_1 e^{it} + c_2 e^{-it}.

To write this in a more conventional way, we use Euler’s formula e^{ix} = \cos x + i \sin x, so that

y = c_1 (\cos t + i \sin t) + c_2 (\cos (-t) + i \sin (-t))

= c_1 \cos t + i c_1 \sin t + c_2 \cos t - i c_2 \sin t

= (c_1 + c_2) \cos t + (ic_1 - ic_2) \sin t

= C_1 \cos t + C_2 \sin t.

Likewise, in the previous post, we encountered the fourth-order differential equation y^{(4)}+5y''+4y = 0. To find the roots of the characteristic equation, we factor:

r^4 + 5r^2 + 4 = 0

(r^2+1)(r^2+4) = 0

r^2 +1 = 0 \qquad \hbox{or} \qquad r^2 + 4 = 0

r = \pm i \qquad \hbox{or} \qquad r = \pm 2i.

Therefore, four solutions of this differential equation are e^{it}, e^{-it}, e^{2it}, and e^{-2it}, so that the general solution is

y = c_1 e^{it} + c_2 e^{-it} + c_3 e^{2it} + c_4 e^{-2it}.

Using Euler’s formula as before, this can be rewritten as

y = C_1 \cos t + C_2 \sin t + C_3 \cos 2t + C_4 \sin 2t.
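Both characteristic equations, and the use of Euler’s formula, can be checked directly with complex arithmetic. Here is a small Python sketch added for illustration:

```python
import cmath
import math

# Roots of the characteristic equation r^2 + 1 = 0 for y'' + y = 0.
for r in (1j, -1j):
    assert abs(r**2 + 1) < 1e-12

# Roots of the characteristic equation r^4 + 5r^2 + 4 = 0.
for r in (1j, -1j, 2j, -2j):
    assert abs(r**4 + 5 * r**2 + 4) < 1e-12

# Euler's formula e^{it} = cos t + i sin t, used to rewrite the solutions.
for t in (0.3, 1.0, 2.5):
    assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-12
print("roots and Euler's formula verified")
```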

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6h: Rationale for Method of Undetermined Coefficients V

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In the two previous posts, we derived the method of undetermined coefficients for the simplified differential equations

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}.

and

u''(\theta) + u(\theta) = \displaystyle \frac{2\delta \epsilon \cos \theta}{\alpha^2}.

In this post, we consider the simplified differential equation if the right-hand side has only the fifth term,

u''(\theta) + u(\theta) =  \displaystyle \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}.

Let v(\theta) = \displaystyle \frac{\delta \epsilon^2 }{2\alpha^2} \cos 2\theta. Then v satisfies the new differential equation v'' + 4v = 0. Also, v = u'' + u. Substituting, we find

(u''+u)'' + 4(u''+u) = 0

u^{(4)} + u'' + 4u'' + 4u = 0

u^{(4)} + 5u'' + 4u = 0

The characteristic equation of this new differential equation is

r^4 + 5r^2 + 4 = 0

(r^2 + 1)(r^2 + 4) = 0

r^2 + 1 = 0 \qquad \hbox{or} \qquad r^2 + 4 = 0

r = \pm i \qquad \hbox{or} \qquad r = \pm 2i

Therefore, the general solution of the new differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3 \cos 2\theta + c_4 \sin 2\theta.

The constants c_3 and c_4 can be found by substituting back into the original differential equation:

u''(\theta) + u(\theta) =  \displaystyle \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}

-c_1 \cos \theta - c_2 \sin \theta - 4c_3 \cos 2\theta - 4c_4 \sin 2\theta + c_1 \cos \theta + c_2 \sin \theta + c_3 \cos 2\theta + c_4 \sin 2\theta = \displaystyle \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}

- 3c_3 \cos 2\theta - 3c_4 \sin 2\theta  = \displaystyle \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}

Matching coefficients, we see that c_3 = \displaystyle -\frac{\delta \epsilon^2}{6\alpha^2} and c_4 = 0. Therefore, the general solution of the simplified differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta \displaystyle -\frac{\delta \epsilon^2}{6\alpha^2} \cos 2\theta.

In particular, setting c_1 = 0 and c_2 = 0, we see that

u(\theta) =  \displaystyle -\frac{\delta \epsilon^2}{6\alpha^2} \cos 2\theta

is a particular solution to the simplified differential equation.
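As an added check (not part of the original derivation), this particular solution can be verified numerically with a central finite difference for u'', using hypothetical values of \alpha, \epsilon, and \delta chosen only for illustration:

```python
import math

# Hypothetical parameter values chosen only for illustration.
alpha, eps, delta = 1.0, 0.2, 1e-3
A = delta * eps**2 / (2 * alpha**2)  # coefficient of cos(2*theta) on the right

def u_p(theta):
    """Candidate particular solution found above."""
    return -delta * eps**2 / (6 * alpha**2) * math.cos(2 * theta)

h = 1e-4  # step size for the central finite difference
for theta in (0.0, 0.7, 1.9, 3.2):
    u2 = (u_p(theta + h) - 2 * u_p(theta) + u_p(theta - h)) / h**2  # u_p''
    assert abs(u2 + u_p(theta) - A * math.cos(2 * theta)) < 1e-8
print("particular solution verified")
```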

In the next post, we put together the solutions of these three simplified differential equations to solve the original differential equation,

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6d: Rationale for Method of Undetermined Coefficients I

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of undetermined coefficients. To use this technique, we first expand the right-hand side of the differential equation and then apply a power-reduction trigonometric identity:

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{2\delta  \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos^2 \theta}{\alpha^2}

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2}{\alpha^2} \frac{1 + \cos 2\theta}{2}

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}

This is now in the form for using the method of undetermined coefficients. However, in this series, I’d like to take some time to explain why this technique actually works. To begin, we look at a simplified differential equation using only the first three terms on the right-hand side:

u''(\theta) + u(\theta) = \displaystyle\frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  .

Let v(\theta) =\displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  . Since v is a constant, this function satisfies the simple differential equation v' = 0. Since u''+u=v, we can substitute:

(u'' + u)' = 0

u''' + u' = 0

(We could have more easily said, “Take the derivative of both sides,” but we’ll be using a more complicated form of this technique in future posts.) The characteristic equation of this differential equation is r^3 + r = 0. Factoring, we obtain r(r^2 + 1) = 0, so that the three roots are r = 0 and r = \pm i. Therefore, the general solution of this differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3.

Notice that this matches the outcome of blindly using the method of undetermined coefficients without conceptually understanding why this technique works.

The constants c_1 and c_2 are determined by the initial conditions. To find c_3, we observe

u''(\theta) +u(\theta) =  -c_1 \cos \theta - c_2 \sin \theta +c_1 \cos \theta + c_2 \sin \theta + c_3

\displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  = c_3.

Therefore, the general solution of this simplified differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}.

Furthermore, setting c_1 = c_2 = 0, we see that

u(\theta) = \displaystyle\frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}

is a particular solution to the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} .
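The reasoning above can be spot-checked numerically (a sketch added for illustration, with hypothetical values of \alpha, \epsilon, and \delta): the roots of r^3 + r = 0 are verified with complex arithmetic, and a finite difference confirms that the general solution satisfies u'' + u = c_3 for any choice of c_1 and c_2.

```python
import math

# The characteristic equation r^3 + r = r(r^2 + 1) = 0 has roots 0 and ±i.
for r in (0, 1j, -1j):
    assert abs(r**3 + r) < 1e-12

# Hypothetical parameter values chosen only for illustration.
alpha, eps, delta = 1.0, 0.2, 1e-3
c3 = 1 / alpha + delta / alpha**2 + delta * eps**2 / (2 * alpha**2)

def u(theta, c1=0.4, c2=-1.1):
    """General solution read off from the roots r = 0, r = ±i."""
    return c1 * math.cos(theta) + c2 * math.sin(theta) + c3

# Check u'' + u = c3 for arbitrary c1, c2 via a central finite difference.
h = 1e-4
for theta in (0.0, 0.8, 2.1, 4.0):
    u2 = (u(theta + h) - 2 * u(theta) + u(theta - h)) / h**2
    assert abs(u2 + u(theta) - c3) < 1e-6
print("general solution verified")
```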

In the next couple of posts, we find the particular solutions associated with the other terms on the right-hand side.