Confirming Einstein’s Theory of General Relativity With Calculus, Part 6g: Rationale for Method of Undetermined Coefficients IV

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In this post, we will use the guesses

u(\theta) = f(\theta) \cos \theta \qquad \hbox{or} \qquad u(\theta) = f(\theta) \sin \theta

that arose from the technique/trick of reduction of order, where f(\theta) is some unknown function, to find the general solution of the differential equation

u^{(4)} + 2u'' + u = 0.

To do this, we will need to use the Product Rule for higher-order derivatives that was derived in the previous post:

(fg)'' = f'' g + 2 f' g' + f g''

and

(fg)^{(4)} = f^{(4)} g + 4 f''' g' + 6 f'' g'' + 4f' g''' + f g^{(4)}.

In these formulas, Pascal’s triangle makes a somewhat surprising appearance; indeed, this pattern can be proven with mathematical induction.

We begin with u(\theta) = f(\theta) \cos \theta. If g(\theta) = \cos \theta, then

g'(\theta) = - \sin \theta,

g''(\theta) = -\cos \theta,

g'''(\theta) = \sin \theta,

g^{(4)}(\theta) = \cos \theta.

Substituting into the fourth-order differential equation, we find that it becomes

(f \cos \theta)^{(4)} + 2 (f \cos \theta)'' + f \cos \theta = 0

f^{(4)} \cos \theta - 4 f''' \sin \theta - 6 f'' \cos \theta + 4 f' \sin \theta + f \cos \theta + 2 f'' \cos \theta - 4 f' \sin \theta - 2 f \cos \theta + f \cos \theta = 0

f^{(4)} \cos \theta - 4 f''' \sin \theta - 6 f'' \cos \theta  + 2 f'' \cos \theta  = 0

f^{(4)} \cos \theta - 4 f''' \sin \theta - 4 f'' \cos \theta = 0

The important observation is that the terms containing f and f' cancelled each other. This new differential equation doesn’t look like much of an improvement over the original fourth-order differential equation, but we can make a key observation: if f'' = 0, then differentiating twice more trivially yields f''' = 0 and f^{(4)} = 0. Said another way: if f'' = 0, then u(\theta) = f(\theta) \cos \theta will be a solution of the original differential equation.

Integrating twice, we can find f:

f''(\theta) = 0

f'(\theta) = c_1

f(\theta) = c_1 \theta + c_2.

Therefore, a solution of the original differential equation will be

u(\theta) = c_1 \theta \cos \theta + c_2 \cos \theta.

We now repeat the logic for u(\theta) = f(\theta) \sin \theta:

(f \sin \theta)^{(4)} + 2 (f \sin \theta)'' + f \sin \theta = 0

f^{(4)} \sin \theta + 4 f''' \cos \theta - 6 f'' \sin \theta - 4 f' \cos\theta + f \sin \theta + 2 f'' \sin \theta + 4 f' \cos \theta - 2 f \sin \theta + f \sin \theta = 0

f^{(4)} \sin\theta + 4 f''' \cos \theta - 6 f'' \sin \theta + 2 f'' \sin \theta = 0

f^{(4)} \sin\theta + 4 f''' \cos\theta - 4 f'' \sin\theta = 0.

Once again, a solution of this new differential equation will be f(\theta) = c_3 \theta + c_4, so that f'' = f''' = f^{(4)} = 0. Therefore, another solution of the original differential equation will be

u(\theta) = c_3 \theta \sin \theta + c_4 \sin \theta.

Adding these provides the general solution of the differential equation:

u(\theta) = c_1 \theta \cos \theta + c_2 \cos \theta + c_3 \theta \sin \theta + c_4 \sin \theta.

Except for the order of the constants, this matches the solution that was presented earlier by using techniques taught in a proper course in differential equations.
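
If you have access to a computer algebra system, this general solution can be double-checked in a few lines. Here is a quick sketch using Python's SymPy library (the variable names are my own): it substitutes the proposed solution into the left-hand side of the differential equation and confirms that everything collapses to zero.

import sympy as sp

theta, c1, c2, c3, c4 = sp.symbols('theta c1 c2 c3 c4')

# Proposed general solution of u'''' + 2u'' + u = 0
u = c1*theta*sp.cos(theta) + c2*sp.cos(theta) + c3*theta*sp.sin(theta) + c4*sp.sin(theta)

# Substitute into the left-hand side of the differential equation
lhs = sp.diff(u, theta, 4) + 2*sp.diff(u, theta, 2) + u
print(sp.simplify(lhs))   # prints 0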

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6f: Rationale for Method of Undetermined Coefficients III

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In the previous post, I used a standard technique from differential equations to find the general solution of

u^{(4)} + 2u'' + u = 0.

to be

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3 \theta \cos \theta + c_4 \theta \sin \theta.

However, as much as possible in this series, I want to take the perspective of a talented calculus student who has not yet taken differential equations — so that the conclusion above is far from obvious. How could this be reasonably coaxed out of such a student?

To begin, we observe that the characteristic equation is

r^4 + 2r^2 + 1 = 0,

or

(r^2 + 1)^2 = 0.

Clearly this has the same roots as the simpler equation r^2 + 1 = 0, which corresponds to the second-order differential equation u'' + u = 0. We’ve already seen that u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta are solutions of this differential equation; perhaps they might also be solutions of the more complicated differential equation? The answer, of course, is yes:

u_1^{(4)} + 2 u_1'' + u_1 = \cos \theta - 2 \cos \theta + \cos \theta = 0

and

u_2^{(4)} + 2u_2'' + u_2 = \sin \theta - 2 \sin \theta + \sin \theta = 0.

The far trickier part is finding the two additional solutions. To find these, we use a standard trick/technique called reduction of order. In this technique, we guess that any additional solutions must have the form of either

u(\theta) = f(\theta) \cos \theta \qquad \hbox{or} \qquad  u(\theta) = f(\theta) \sin \theta,

where f(\theta) is some unknown function that we’re multiplying by the solutions we already have. We then substitute this into the differential equation u^{(4)} + 2u'' + u = 0 to form a new differential equation for the unknown f, which we can (hopefully) solve.

Doing this will require multiple applications of the Product Rule for differentiation. We already know that

(fg)' = f' g + f g'.

We now differentiate again, using the Product Rule, to find (fg)'':

(fg)'' = ( [fg]')' = (f'g)' + (fg')'

= f''g + f' g' + f' g' + f g''

= f'' g + 2 f' g' + f g''.

We now differentiate twice more to find (fg)^{(4)}:

(fg)''' = ( [fg]'')' = (f''g)' + 2(f'g')' +  (fg'')'

= f'''g + f'' g' + 2f'' g' + 2f' g'' + f' g'' + f g'''

= f''' g + 3 f'' g' + 3 f' g'' + f g'''.

A good student may be able to guess the pattern for the next derivative:

(fg)^{(4)} = ( [fg]''')' = (f'''g)' + 3(f''g')' +3(f'g'')' + (fg''')'

= f^{(4)}g + f''' g' + 3f''' g' + 3f'' g'' + 3f'' g'' + 3f'g''' + f' g''' + f g^{(4)}

= f^{(4)} g + 4 f''' g' + 6 f'' g'' + 4f' g''' + f g^{(4)}.

In this way, Pascal’s triangle makes a somewhat surprising appearance; indeed, this pattern can be proven with mathematical induction.
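
For skeptics (or for students who would rather not grind through the induction), the fourth-derivative formula can also be confirmed with a computer algebra system. Here is a brief sketch in Python's SymPy (my own notation): it differentiates a generic product four times and subtracts the claimed Pascal's-triangle expansion.

import sympy as sp

theta = sp.symbols('theta')
f = sp.Function('f')(theta)
g = sp.Function('g')(theta)

# Differentiate the product four times directly
lhs = sp.diff(f*g, theta, 4)

# The claimed expansion with Pascal's-triangle coefficients 1, 4, 6, 4, 1
rhs = (sp.diff(f, theta, 4)*g + 4*sp.diff(f, theta, 3)*sp.diff(g, theta)
       + 6*sp.diff(f, theta, 2)*sp.diff(g, theta, 2)
       + 4*sp.diff(f, theta)*sp.diff(g, theta, 3) + f*sp.diff(g, theta, 4))

print(sp.expand(lhs - rhs))   # prints 0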

In the next post, we’ll apply this to the solution of the fourth-order differential equation.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6e: Rationale for Method of Undetermined Coefficients II

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In the previous post, we derived the method of undetermined coefficients for the simplified differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}.

In this post, we consider the simplified differential equation if the right-hand side has only the fourth term,

u''(\theta) + u(\theta) =  \displaystyle \frac{2\delta \epsilon }{\alpha^2}\cos \theta.

Let v(\theta) =  \displaystyle \frac{2\delta \epsilon }{\alpha^2}\cos \theta. Then v satisfies the new differential equation v'' + v = 0. Since u'' + u = v, we may substitute:

(u''+u)'' + (u'' + u) = 0

u^{(4)} + u'' + u'' + u = 0

u^{(4)} + 2u'' + u = 0.

The characteristic equation of this homogeneous differential equation is r^4 + 2r^2 + 1 = 0, or (r^2+1)^2 = 0. Therefore, r = i and r = -i are both double roots of this quartic equation, so the general solution for u is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3 \theta \cos \theta + c_4 \theta \sin \theta.

Substituting into the original differential equation will allow for the computation of c_3 and c_4:

u''(\theta) + u(\theta) = -c_1 \cos \theta - c_2 \sin \theta - 2c_3 \sin \theta - c_3 \theta \cos \theta + 2c_4 \cos \theta - c_4 \theta \sin \theta

+   c_1 \cos \theta + c_2 \sin \theta + c_3 \theta \cos \theta + c_4 \theta \sin \theta

\displaystyle \frac{2\delta \epsilon }{\alpha^2}\cos \theta = - 2c_3 \sin \theta+ 2c_4 \cos \theta

Matching coefficients, we see that c_3 = 0 and c_4 = \displaystyle \frac{\delta \epsilon }{\alpha^2}. Therefore,

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + \displaystyle \frac{\delta \epsilon }{\alpha^2} \theta \sin \theta

is the general solution of the simplified differential equation. Setting c_1 = c_2 = 0, we find that

u(\theta) =  \displaystyle \frac{\delta \epsilon }{\alpha^2} \theta \sin \theta

is one particular solution of this simplified differential equation. Not surprisingly, this matches the result if the method of undetermined coefficients had been blindly followed.
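
For readers who want an independent check, here is a short SymPy sketch (symbol names are mine) confirming that this particular solution really does satisfy the simplified differential equation.

import sympy as sp

theta = sp.symbols('theta')
alpha, delta, epsilon = sp.symbols('alpha delta epsilon', positive=True)

# Proposed particular solution
u = delta*epsilon/alpha**2 * theta * sp.sin(theta)

lhs = sp.diff(u, theta, 2) + u
rhs = 2*delta*epsilon/alpha**2 * sp.cos(theta)
print(sp.simplify(lhs - rhs))   # prints 0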

As we’ll see in a future post, the presence of this \theta \sin \theta term is what predicts the precession of a planet’s orbit under general relativity.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6d: Rationale for Method of Undetermined Coefficients I

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of undetermined coefficients. To use this technique, we first expand the right-hand side of the differential equation and then apply a power-reduction trigonometric identity:

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{2\delta  \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos^2 \theta}{\alpha^2}

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2}{\alpha^2} \frac{1 + \cos 2\theta}{2}

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}

This is now in the form for using the method of undetermined coefficients. However, in this series, I’d like to take some time to explain why this technique actually works. To begin, we look at a simplified differential equation using only the first three terms on the right-hand side:

u''(\theta) + u(\theta) = \displaystyle\frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  .

Let v(\theta) =\displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  . Since v is a constant, this function satisfies the simple differential equation v' = 0. Since u''+u=v, we can substitute:

(u'' + u)' = 0

u''' + u' = 0

(We could have more easily said, “Take the derivative of both sides,” but we’ll be using a more complicated form of this technique in future posts.) The characteristic equation of this differential equation is r^3 + r = 0. Factoring, we obtain r(r^2 + 1) = 0, so that the three roots are r = 0 and r = \pm i. Therefore, the general solution of this differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3.

Notice that this matches the outcome of blindly using the method of undetermined coefficients without conceptually understanding why this technique works.
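
(As a side check, a computer algebra system will happily solve the third-order equation directly. A short sketch in Python's SymPy, with my own notation, is below; the labels on the arbitrary constants may differ, but the form of the answer is the same.)

import sympy as sp

theta = sp.symbols('theta')
u = sp.Function('u')

# General solution of u''' + u' = 0
sol = sp.dsolve(sp.Eq(u(theta).diff(theta, 3) + u(theta).diff(theta), 0), u(theta))
print(sol)   # u(theta) = C1 + C2*sin(theta) + C3*cos(theta), up to the naming of the constants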

The constants c_1 and c_2 are determined by the initial conditions. To find c_3, we observe

u''(\theta) +u(\theta) =  -c_1 \cos \theta - c_2 \sin \theta +c_1 \cos \theta + c_2 \sin \theta + c_3

\displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  = c_3.

Therefore, the general solution of this simplified differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}.

Furthermore, setting c_1 = c_2 = 0, we see that

u(\theta) = \displaystyle\frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}

is a particular solution to the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} .

In the next couple of posts, we find the particular solutions associated with the other terms on the right-hand side.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6c: Solving New Differential Equation with Variation of Parameters

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

Under general relativity, the motion of a planet around the Sun —in polar coordinates (r,\theta), with the Sun at the origin — satisfies the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of variation of parameters. First, we solve the associated homogeneous differential equation

u''(\theta) + u(\theta) = 0.

The characteristic equation of this differential equation is r^2 + 1 = 0, which clearly has the two imaginary roots r = \pm i. Therefore, two linearly independent solutions of the associated homogeneous equation are u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta.

(As an aside, this is one answer to the common question, “What are complex numbers good for?” The answer is naturally above the heads of Algebra II students when they first encounter the mysterious number i, but complex numbers provide a way of solving the differential equations that model multiple problems in statics and dynamics.)

According to the method of variation of parameters, the general solution of the original nonhomogeneous differential equation

u''(\theta) + u(\theta) = g(\theta)

is

u(\theta) = f_1(\theta) u_1(\theta) + f_2(\theta) u_2(\theta),

where

f_1(\theta) = -\displaystyle \int \frac{u_2(\theta) g(\theta)}{W(\theta)} d\theta ,

f_2(\theta) = \displaystyle \int \frac{u_1(\theta) g(\theta)}{W(\theta)} d\theta ,

and W(\theta) is the Wronskian of u_1(\theta) and u_2(\theta), defined by the determinant

W(\theta) = \displaystyle \begin{vmatrix} u_1(\theta) & u_2(\theta) \\ u_1'(\theta) & u_2'(\theta) \end{vmatrix}  = u_1(\theta) u_2'(\theta) - u_1'(\theta) u_2(\theta).

Well, that’s a mouthful.

The only good news is that W is easy to compute. Since u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta, we have

W(\theta) = (\cos \theta)(\cos \theta) - (\sin \theta)(-\sin \theta) = \cos^2 \theta + \sin^2 \theta = 1

from the usual Pythagorean trigonometric identity. Therefore, the denominators in the integrals for f_1(\theta) and f_2(\theta) essentially disappear.

Unfortunately, computing f_1(\theta) and f_2(\theta), using

g(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

is a beast, requiring the creative use of multiple trigonometric identities. We begin with f_1(\theta), using the substitution t = \cos \theta:

f_1(\theta) = -\displaystyle \int \left[ \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2 \right] \sin \theta \, d\theta

= \displaystyle \int \left[ \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon t}{\alpha} \right)^2 \right] \, dt

= \displaystyle \frac{t}{\alpha} +  \frac{\alpha\delta}{3\epsilon} \left( \frac{1 + \epsilon t}{\alpha} \right)^3 + a

= \displaystyle \frac{\cos \theta}{\alpha} +  \frac{\alpha\delta}{3\epsilon} \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^3 + a,

where we use +a for the constant of integration instead of the usual +C. Second,

f_2(\theta) = \displaystyle \int \left[ \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2 \right] \cos\theta \, d\theta.

Unfortunately, this is not easily simplified with a substitution, so we have to expand the integrand:

f_2(\theta) = \displaystyle \int \left[ \frac{\cos \theta}{\alpha} + \frac{\delta \cos \theta}{\alpha^2} + \frac{2 \delta \epsilon \cos^2 \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos^3 \theta}{\alpha^2} \right] \, d\theta

= \displaystyle \int \left[ \frac{\cos \theta}{\alpha} + \frac{\delta \cos \theta}{\alpha^2} + \frac{\delta \epsilon (1 + \cos 2 \theta)}{\alpha^2} + \frac{\delta \epsilon^2 \cos \theta \cos^2 \theta}{\alpha^2} \right] \, d\theta

= \displaystyle \int \left[ \frac{\cos \theta}{\alpha} + \frac{\delta \cos \theta}{\alpha^2} + \frac{\delta \epsilon}{\alpha^2} + \frac{\delta \epsilon \cos 2 \theta}{\alpha^2}+ \frac{\delta \epsilon^2 \cos \theta (1- \sin^2 \theta)}{\alpha^2} \right] \, d\theta

= \displaystyle \int \left[ \frac{\cos \theta}{\alpha} + \frac{\delta (1 + \epsilon^2) \cos \theta}{\alpha^2}  + \frac{\delta \epsilon}{\alpha^2} + \frac{\delta \epsilon \cos 2 \theta}{\alpha^2} - \frac{\delta \epsilon^2 \cos \theta \sin^2 \theta}{\alpha^2} \right] \, d\theta

= \displaystyle \frac{\sin\theta}{\alpha} + \frac{\delta (1 + \epsilon^2) \sin\theta}{\alpha^2}  + \frac{\delta \epsilon \theta}{\alpha^2} + \frac{\delta \epsilon \sin 2 \theta}{2\alpha^2} - \frac{\delta \epsilon^2 \sin^3 \theta}{3\alpha^2} + b,

using +b for the constant of integration. Therefore, by variation of parameters, the general solution of the nonhomogeneous differential equation is

u(\theta) = f_1(\theta) u_1(\theta) + f_2(\theta) u_2(\theta)

= \displaystyle \left( \frac{\cos \theta}{\alpha} +  \frac{\delta}{3\alpha^2\epsilon} \left( 1 + \epsilon \cos \theta \right)^3 + a \right) \cos \theta

+ \displaystyle \left(\frac{\sin\theta}{\alpha} + \frac{\delta (1 + \epsilon^2) \sin\theta}{\alpha^2}  + \frac{\delta \epsilon \theta}{\alpha^2} + \frac{\delta \epsilon \sin 2 \theta}{2\alpha^2} - \frac{\delta \epsilon^2 \sin^3 \theta}{3\alpha^2} + b  \right) \sin \theta

= a\cos \theta + b \sin \theta + \displaystyle \frac{\cos^2 \theta + \sin^2 \theta}{\alpha} +  \frac{\delta \cos \theta}{3\alpha^2\epsilon} +  \frac{\delta \cos^2 \theta}{\alpha^2} +  \frac{\delta \epsilon \cos^3 \theta}{\alpha^2}+  \frac{\delta\epsilon^2 \cos^4 \theta}{3\alpha^2}

+ \displaystyle \frac{\delta \sin^2 \theta}{\alpha^2} + \frac{\delta \epsilon^2 \sin^2\theta}{\alpha^2}  + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} + \frac{\delta \epsilon \sin \theta\sin 2 \theta}{2\alpha^2} - \frac{\delta \epsilon^2 \sin^4 \theta}{3\alpha^2}

= a\cos \theta + \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon} + b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta (\cos^2 \theta+\sin^2 \theta)}{\alpha^2} +  \frac{\delta \epsilon \cos^3 \theta}{\alpha^2}+  \frac{\delta\epsilon^2 \cos^4 \theta}{3\alpha^2}

\displaystyle + \frac{\delta \epsilon^2 \sin^2\theta}{\alpha^2}  + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} + \frac{\delta \epsilon \sin \theta\sin 2 \theta}{2\alpha^2} - \frac{\delta \epsilon^2 \sin^4 \theta}{3\alpha^2}

= a\cos \theta+ \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon}  + b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} +  \frac{\delta \epsilon \cos^3 \theta}{\alpha^2} + \frac{\delta \epsilon \sin \theta\sin 2 \theta}{2\alpha^2}

\displaystyle + \frac{\delta \epsilon^2 \sin^2\theta}{\alpha^2}  + \frac{\delta \epsilon^2 (\cos^4 \theta -\sin^4 \theta)}{3\alpha^2}

= a\cos \theta + \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon} + b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2}  +  \frac{\delta \epsilon \cos^3 \theta}{\alpha^2} + \frac{\delta \epsilon \sin \theta \cdot 2 \sin \theta \cos \theta}{2\alpha^2}

\displaystyle + \frac{\delta \epsilon^2 (1-\cos 2\theta)}{2\alpha^2}  + \frac{\delta \epsilon^2 (\cos^2 \theta +\sin^2\theta)(\cos^2 \theta - \sin^2 \theta)}{3\alpha^2}

= a\cos \theta+ \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon} + b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2}  +  \frac{\delta \epsilon \cos^3 \theta}{\alpha^2} + \frac{\delta \epsilon \sin^2 \theta \cos \theta}{\alpha^2}

\displaystyle + \frac{\delta \epsilon^2}{2\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}  + \frac{\delta \epsilon^2 \cos 2 \theta}{3\alpha^2}

= a\cos \theta + \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon}+ b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{6\alpha^2}

\displaystyle   +  \frac{\delta \epsilon \cos^3 \theta}{\alpha^2} + \frac{\delta \epsilon \sin^2 \theta\cos \theta}{\alpha^2}

= a\cos \theta + \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon}+ b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{6\alpha^2}

\displaystyle   +  \frac{\delta \epsilon \cos \theta (\cos^2 \theta + \sin^2 \theta)}{\alpha^2}

= a\cos \theta + \displaystyle \frac{\delta \cos \theta}{3\alpha^2\epsilon}+ b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{6\alpha^2} +  \frac{\delta \epsilon \cos \theta}{\alpha^2}

= \displaystyle \left(a +  \displaystyle \frac{\delta}{3\alpha^2\epsilon} +  \frac{\delta \epsilon}{\alpha^2} \right) \cos \theta + b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{6\alpha^2}

= \displaystyle A \cos \theta + b \sin \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{6\alpha^2} ,

where A is another arbitrary constant.

Next, we use the initial conditions to find the constants A and b. From the initial condition u(0) = \displaystyle \frac{1}{P} = \frac{1+\epsilon}{\alpha}, we obtain

u(0) = \displaystyle A \cos 0 + b \sin 0 + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \cdot 0 \cdot \sin 0}{\alpha^2} -\frac{\delta \epsilon^2 \cos 0}{6\alpha^2}

\displaystyle \frac{1+\epsilon}{\alpha} = A + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  -\frac{\delta \epsilon^2}{6\alpha^2}

\displaystyle \frac{\epsilon}{\alpha} = A + \displaystyle \frac{3\delta +\delta \epsilon^2}{3\alpha^2},

so that

A = \displaystyle \frac{\epsilon}{\alpha} - \frac{\delta(3 + \epsilon^2)}{3\alpha^2}.

Next, we compute u'(\theta) and use the initial condition u'(0) = 0:

u'(\theta) = \displaystyle -A \sin \theta + b \cos \theta + \frac{\delta \epsilon}{\alpha^2} (\sin \theta + \theta \cos \theta) + \frac{\delta \epsilon^2 \sin 2\theta}{3\alpha^2}

u'(0) = \displaystyle -A \sin 0 + b \cos 0 + \frac{\delta \epsilon}{\alpha^2} (\sin 0 + 0  \cos 0) + \frac{\delta \epsilon^2 \sin 0}{3\alpha^2}

0 = b.

Substituting these values for A and b, we finally arrive at the solution

u(\theta) = \displaystyle \left(\frac{\epsilon}{\alpha} - \frac{\delta(3 + \epsilon^2)}{3\alpha^2} \right) \cos \theta + \displaystyle \frac{1}{\alpha} +  \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\delta \epsilon \theta \sin \theta}{\alpha^2} -\frac{\delta \epsilon^2 \cos 2\theta}{6\alpha^2}

= \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{ \delta\epsilon}{\alpha^2} \theta \sin \theta - \frac{ \delta \epsilon^2}{6\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta.
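
Because the algebra above is so long, it is worth letting a computer algebra system audit it. The following SymPy sketch (variable names are mine) checks that the antiderivatives claimed for f_1 and f_2 are correct and that the assembled function satisfies the differential equation.

import sympy as sp

theta = sp.symbols('theta')
alpha, delta, epsilon = sp.symbols('alpha delta epsilon', positive=True)

g = 1/alpha + delta*((1 + epsilon*sp.cos(theta))/alpha)**2   # forcing term
u1, u2 = sp.cos(theta), sp.sin(theta)                        # homogeneous solutions (W = 1)

# Antiderivatives claimed in this post (constants of integration omitted)
f1 = sp.cos(theta)/alpha + alpha*delta/(3*epsilon)*((1 + epsilon*sp.cos(theta))/alpha)**3
f2 = (sp.sin(theta)/alpha + delta*(1 + epsilon**2)*sp.sin(theta)/alpha**2
      + delta*epsilon*theta/alpha**2 + delta*epsilon*sp.sin(2*theta)/(2*alpha**2)
      - delta*epsilon**2*sp.sin(theta)**3/(3*alpha**2))

# Their derivatives should match the variation-of-parameters integrands
print(sp.simplify(sp.expand_trig(sp.diff(f1, theta) + u2*g)))   # prints 0
print(sp.simplify(sp.expand_trig(sp.diff(f2, theta) - u1*g)))   # prints 0

# And u = f1*u1 + f2*u2 should satisfy u'' + u = g
u = f1*u1 + f2*u2
print(sp.simplify(sp.expand_trig(sp.diff(u, theta, 2) + u - g)))   # prints 0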

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6b: Checking Solution of New Differential Equation with Calculus

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

In the last post, we showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under general relativity the motion of the planet follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

I won’t sugar-coat it; the solution is a big mess:

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} \theta \sin \theta - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta.

That said, it is an elementary, if complicated, exercise in calculus to confirm that this satisfies all three equations above. We’ll start with the second one:

u(0) = \displaystyle \frac{1 + \epsilon \cos 0}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} \cdot 0 \sin 0 - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 0 - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos 0

= \displaystyle \frac{1 + \epsilon}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} - \frac{\epsilon^2 \delta}{6\alpha^2} - \frac{\delta(3+\epsilon^2)}{3\alpha^2}

= \displaystyle \frac{ 6\alpha(1+\epsilon) + 6\delta + 3 \delta \epsilon^2 - \delta \epsilon^2 - 2\delta (3+\epsilon^2)}{6\alpha^2}

= \displaystyle \frac{ 6\alpha(1+\epsilon) + 6\delta + 3 \delta \epsilon^2 - \delta \epsilon^2 - 6\delta - 2\delta \epsilon^2}{6\alpha^2}

= \displaystyle \frac{ 6\alpha(1+\epsilon)}{6\alpha^2}

= \displaystyle \frac{ 1+\epsilon}{\alpha}

= \displaystyle \frac{1}{P},

where in the last step we used the equation P = \displaystyle \frac{\alpha}{1 + \epsilon} that was obtained earlier in this series.

Next, to check the initial condition u'(0) = 0, we differentiate:

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} \theta \sin \theta - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta

u'(\theta) = \displaystyle -\frac{\epsilon \sin \theta}{\alpha} + \frac{\epsilon \delta}{\alpha^2} (\sin \theta + \theta \cos \theta) + \frac{\epsilon^2 \delta}{3\alpha^2} \sin 2\theta + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \sin\theta

u'(0) = \displaystyle -\frac{\epsilon \sin 0}{\alpha} + \frac{\epsilon \delta}{\alpha^2} (\sin 0 + 0 \cdot \cos 0) + \frac{\epsilon^2 \delta}{3\alpha^2} \sin 0 + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \sin 0 = 0.

Finally, to check the differential equation itself, we compute the second derivative:

u''(\theta) = \displaystyle -\frac{\epsilon \cos \theta}{\alpha} + \frac{\epsilon \delta}{\alpha^2} (2 \cos \theta - \theta \sin \theta) + \frac{2\epsilon^2 \delta}{3\alpha^2} \cos 2\theta + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos\theta.

Adding u''(\theta) and u(\theta), we find

u''(\theta) + u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta - \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} (\theta \sin \theta + 2 \cos \theta - \theta \sin \theta)

\displaystyle - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 2\theta + \frac{2\epsilon^2 \delta}{3\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos\theta,

which simplifies considerably:

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2 \epsilon \delta}{\alpha^2} \cos \theta + \frac{\epsilon^2 \delta}{2\alpha^2} \cos 2\theta

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} \left( 1 + \frac{\epsilon^2}{2} + 2 \epsilon \cos \theta + \frac{\epsilon^2}{2} \cos 2\theta \right)

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} \left( 1 + 2 \epsilon \cos \theta + \epsilon^2 \frac{1+\cos 2\theta}{2} \right)

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} \left( 1 + 2 \epsilon \cos \theta + \epsilon^2 \cos^2 \theta \right)

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} (1 + \epsilon \cos \theta)^2

= \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1+\epsilon \cos \theta}{\alpha} \right)^2,

where we used the power-reduction trigonometric identity

\cos^2 \theta = \displaystyle \frac{1 + \cos 2\theta}{2}

on the second-to-last step.
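
The same verification can be done symbolically in a few lines. Here is a SymPy sketch (with my own variable names) that checks the differential equation and both initial conditions, using 1/P = (1+\epsilon)/\alpha.

import sympy as sp

theta = sp.symbols('theta')
alpha, delta, epsilon = sp.symbols('alpha delta epsilon', positive=True)

# The proposed solution from the start of this post
u = ((1 + epsilon*sp.cos(theta))/alpha + delta/alpha**2 + delta*epsilon**2/(2*alpha**2)
     + epsilon*delta/alpha**2 * theta*sp.sin(theta)
     - epsilon**2*delta/(6*alpha**2) * sp.cos(2*theta)
     - delta*(3 + epsilon**2)/(3*alpha**2) * sp.cos(theta))

rhs = 1/alpha + delta*((1 + epsilon*sp.cos(theta))/alpha)**2

print(sp.simplify(sp.expand_trig(sp.diff(u, theta, 2) + u - rhs)))   # prints 0
print(sp.simplify(u.subs(theta, 0) - (1 + epsilon)/alpha))           # prints 0, i.e. u(0) = 1/P
print(sp.diff(u, theta).subs(theta, 0))                              # prints 0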

While we have verified the proposed solution of the initial-value problem, and the steps for doing so lie completely within the grasp of a good calculus student, I’ll be the first to say that this solution is somewhat unsatisfying: the solution appeared seemingly out of thin air, and we just checked to see if this mysterious solution actually works. In the next few posts, I’ll discuss how this solution can be derived using standard techniques from first-semester differential equations.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6a: New Differential Equation

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u_0''(\theta) + u_0(\theta) = \displaystyle \frac{Gm^2 M}{\ell^2},

where u_0 = 1/r, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, and \ell is the constant angular momentum of the planet. For simplicity, we wrote \displaystyle \frac{1}{\alpha} = \displaystyle \frac{Gm^2 M}{\ell^2}.

We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the sun), at a distance of P, when \theta = 0. This means that u obtains its maximum value of 1/P when \theta = 0. This leads to the two initial conditions

u_0(0) = \displaystyle \frac{1}{P} \qquad \hbox{and} \qquad u_0'(0) = 0;

the second equation arises since u has a local extremum at \theta = 0.

In previous posts, we showed that the solution of this initial-value problem is

u_0(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha},

where \epsilon = \displaystyle \frac{\alpha - P}{P}. Since r = 1/u_0, we see that the planet’s orbit satisfies

r = \displaystyle \frac{\alpha}{1 + \epsilon \cos \theta},

so that, as shown earlier in this series, the orbit is an ellipse with eccentricity \epsilon.

At long last, we’re now ready to see what happens under general relativity. According to the theory of general relativity, the governing differential equation for the orbit of a planet should be changed ever so slightly to

u''(\theta) + u(\theta) = \displaystyle \frac{Gm^2 M}{\ell^2} + \frac{3GM}{c^2} [u(\theta)]^2,

where a second term was added to the right-hand side. (I will make no attempt here to justify the physics for this second term.) In this second term, c represents the speed of light. Since c is very large, this second term is very, very small. Using \displaystyle \frac{1}{\alpha} = \displaystyle \frac{Gm^2 M}{\ell^2} as before and defining \delta = \displaystyle \frac{3GM}{c^2}, this may be simplified to

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta [u(\theta)]^2.

Finding an exact solution of this new differential equation is hopeless. While the previous differential equation was linear with constant coefficients, this new differential equation is decidedly nonlinear because of the new term [u(\theta)]^2.

So, instead of attempting to find an exact solution, we will use the method of successive approximations, mentioned earlier in this series, to find an approximate solution that is very, very close to the exact solution. Since \delta is very, very small, the solution of

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha}

will be approximately equal to the solution of the “real” differential equation under general relativity. We have already shown that the solution of this simpler differential equation is

u_0(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha}.

Therefore, the method of successive approximations suggests that a better approximation to the true orbit will be the solution of

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta [u_0(\theta)]^2,

or

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2.

While significantly messier than the differential equation under Newtonian mechanics, this is now a linear differential equation that can be solved using standard techniques from differential equations. In the next few posts, we will solve this new differential equation, thus finding the predicted orbit of a planet under general relativity.
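
To get a feel for how good the method of successive approximations is here, we can integrate both differential equations numerically and compare. The sketch below uses Python (NumPy and SciPy) with made-up, non-physical parameter values of my own choosing; the point is only that, when \delta is small, the two solutions stay extremely close over several revolutions.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (non-physical) parameter values; delta is small compared with 1/alpha
alpha, epsilon, delta = 1.0, 0.2, 1e-4
P = alpha/(1 + epsilon)

def full_rhs(theta, y):
    # Nonlinear equation under general relativity: u'' = 1/alpha + delta*u**2 - u
    u, up = y
    return [up, 1/alpha + delta*u**2 - u]

def approx_rhs(theta, y):
    # Linearized equation: replace u**2 by the square of the Newtonian solution u0
    u0 = (1 + epsilon*np.cos(theta))/alpha
    u, up = y
    return [up, 1/alpha + delta*u0**2 - u]

thetas = np.linspace(0, 6*np.pi, 2000)   # three revolutions
y0 = [1/P, 0.0]
sol_full = solve_ivp(full_rhs, (0, 6*np.pi), y0, t_eval=thetas, rtol=1e-10, atol=1e-12)
sol_approx = solve_ivp(approx_rhs, (0, 6*np.pi), y0, t_eval=thetas, rtol=1e-10, atol=1e-12)

# The largest gap between the two orbits is far smaller than delta itself
print(np.max(np.abs(sol_full.y[0] - sol_approx.y[0])))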

Confirming Einstein’s Theory of General Relativity With Calculus, Part 5d: Deriving Orbits under Newtonian Mechanics Using Variation of Parameters

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the sun), at a distance of P, when \theta = 0. This means that u obtains its maximum value of 1/P when \theta = 0. This leads to the two initial conditions

u(0) = \displaystyle \frac{1}{P} \qquad \hbox{and} \qquad u'(0) = 0;

the second equation arises since u has a local extremum at \theta = 0.

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of variation of parameters. First, we solve the associated homogeneous differential equation

u''(\theta) + u(\theta) = 0.

The characteristic equation of this differential equation is r^2 + 1 = 0, which clearly has the two imaginary roots r = \pm i. Therefore, two linearly independent solutions of the associated homogeneous equation are u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta.

(As an aside, this is one answer to the common question, “What are complex numbers good for?” The answer is naturally above the heads of Algebra II students when they first encounter the mysterious number i, but complex numbers provide a way of solving the differential equations that model multiple problems in statics and dynamics.)

According to the method of variation of parameters, the general solution of the original nonhomogeneous differential equation

u''(\theta) + u(\theta) = g(\theta)

is

u(\theta) = f_1(\theta) u_1(\theta) + f_2(\theta) u_2(\theta),

where

f_1(\theta) = -\displaystyle \int \frac{u_2(\theta) g(\theta)}{W(\theta)} d\theta ,

f_2(\theta) = \displaystyle \int \frac{u_1(\theta) g(\theta)}{W(\theta)} d\theta ,

and W(\theta) is the Wronskian of u_1(\theta) and u_2(\theta), defined by the determinant

W(\theta) = \displaystyle \begin{vmatrix} u_1(\theta) & u_2(\theta) \\ u_1'(\theta) & u_2'(\theta) \end{vmatrix}  = u_1(\theta) u_2'(\theta) - u_1'(\theta) u_2(\theta).

Well, that’s a mouthful.

Fortunately, for the example at hand, these computations are pretty easy. First, since u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta, we have

W(\theta) = (\cos \theta)(\cos \theta) - (\sin \theta)(-\sin \theta) = \cos^2 \theta + \sin^2 \theta = 1

from the usual Pythagorean trigonometric identity. Therefore, the denominators in the integrals for f_1(\theta) and f_2(\theta) essentially disappear.

Since g(\theta) = \displaystyle \frac{1}{\alpha}, the integrals for f_1(\theta) and f_2(\theta) are straightforward to compute:

f_1(\theta) = -\displaystyle \int u_2(\theta) \frac{1}{\alpha} d\theta = -\displaystyle \frac{1}{\alpha} \int \sin \theta \, d\theta = \displaystyle \frac{1}{\alpha}\cos \theta + a,

where we use +a for the constant of integration instead of the usual +C. Second,

f_2(\theta) = \displaystyle \int u_1(\theta)  \frac{1}{\alpha} d\theta = \displaystyle \frac{1}{\alpha} \int \cos \theta \, d\theta = \displaystyle \frac{1}{\alpha}\sin \theta + b,

using +b for the constant of integration. Therefore, by variation of parameters, the general solution of the nonhomogeneous differential equation is

u(\theta) = f_1(\theta) u_1(\theta) + f_2(\theta) u_2(\theta)

= \left( \displaystyle \frac{1}{\alpha}\cos \theta + a \right) \cos \theta + \left( \displaystyle \frac{1}{\alpha}\sin\theta + b \right) \sin \theta

= a \cos \theta + b\sin \theta + \displaystyle \frac{\cos^2 \theta + \sin^2 \theta}{\alpha}

= a \cos \theta + b \sin \theta + \displaystyle \frac{1}{\alpha}.

Unsurprisingly, this matches the answer in the previous post that was found by the method of undetermined coefficients.
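
Since variation of parameters is formula-driven, it also lends itself to a quick symbolic check. Here is a SymPy sketch (my own notation) that carries out the two integrals and reassembles the general solution.

import sympy as sp

theta = sp.symbols('theta')
alpha = sp.symbols('alpha', positive=True)
a, b = sp.symbols('a b')

g = 1/alpha                                   # constant forcing term
u1, u2 = sp.cos(theta), sp.sin(theta)         # homogeneous solutions
W = sp.simplify(u1*sp.diff(u2, theta) - sp.diff(u1, theta)*u2)   # Wronskian, simplifies to 1

f1 = -sp.integrate(u2*g/W, theta) + a         # constants of integration added by hand
f2 = sp.integrate(u1*g/W, theta) + b

u = sp.simplify(f1*u1 + f2*u2)
print(u)                                          # a*cos(theta) + b*sin(theta) + 1/alpha (up to term order)
print(sp.simplify(sp.diff(u, theta, 2) + u - g))  # prints 0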

For the sake of completeness, I repeat the argument used in the previous two posts to determine a and b. This requires using the initial conditions u(0) = \displaystyle \frac{1}{P} and u'(0) = 0. From the first initial condition,

u(0) = a \cos 0 + b \sin 0 + \displaystyle \frac{1}{\alpha}

\displaystyle \frac{1}{P} = a + \frac{1}{\alpha}

\displaystyle \frac{1}{P} - \frac{1}{\alpha} = a

\displaystyle \frac{\alpha - P}{\alpha P} = a

From the second initial condition,

u'(\theta) = -a \sin \theta + b \cos \theta

u'(0) = -a \sin 0 + b \cos 0

0 = b.

From these two constants, we obtain

u(\theta) = \displaystyle \frac{\alpha - P}{\alpha P}  \cos \theta + 0 \sin \theta + \displaystyle \frac{1}{\alpha}

= \displaystyle \frac{1}{\alpha} \left(  1 + \frac{\alpha-P}{P} \cos \theta \right)

= \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha},

where \epsilon = \displaystyle \frac{\alpha - P}{P}.

Finally, since r = 1/u, we see that the planet’s orbit satisfies

r = \displaystyle \frac{\alpha}{1 + \epsilon \cos \theta},

so that, as shown earlier in this series, the orbit is an ellipse with eccentricity \epsilon.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 5c: Deriving Orbits under Newtonian Mechanics with the Method of Undetermined Coefficients

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the sun), at a distance of P, when \theta = 0. This means that u obtains its maximum value of 1/P when \theta = 0. This leads to the two initial conditions

u(0) = \displaystyle \frac{1}{P} \qquad \hbox{and} \qquad u'(0) = 0;

the second equation arises since u has a local extremum at \theta = 0.

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of undetermined coefficients. First, we solve the associated homogeneous differential equation

u''(\theta) + u(\theta) = 0.

The characteristic equation of this differential equation is r^2 + 1 = 0, which clearly has the two imaginary roots r = \pm i. Therefore, the solution of the associated homogeneous equation is u_h(\theta) = a \cos \theta + b \sin \theta.

(As an aside, this is one answer to the common question, “What are complex numbers good for?” The answer is naturally above the heads of Algebra II students when they first encounter the mysterious number i, but complex numbers provide a way of solving the differential equations that model multiple problems in statics and dynamics.)

Next, we find a particular solution to the original differential equation. Since the right-hand side is a constant and r=0 is not a solution of the characteristic equation, this leads to trying something of the form U(\theta) = A as a solution, where A is a soon-to-be-determined constant. (Guessing this form of U(\theta) is a standard technique from differential equations; later in this series, I’ll give some justification for this guess.)

Clearly, U'(\theta) = 0 and U''(\theta) = 0. Substituting, we find

U''(\theta) + U(\theta) = \displaystyle \frac{1}{\alpha}

0 + A = \displaystyle \frac{1}{\alpha}

A = \displaystyle \frac{1}{\alpha}

Therefore, U(\theta) = \displaystyle \frac{1}{\alpha} is a particular solution of the nonhomogeneous differential equation.

Next, the general solution of the nonhomogeneous differential equation is found by adding the general solution to the associated homogeneous differential equation and the particular solution:

u(\theta) = u_h(\theta) + U(\theta) = a \cos \theta + b\sin \theta + \displaystyle \frac{1}{\alpha}.

Finally, to determine a and b, we use the initial conditions u(0) = \displaystyle \frac{1}{P} and u'(0) = 0. From the first initial condition,

u(0) = a \cos 0 + b \sin 0 + \displaystyle \frac{1}{\alpha}

\displaystyle \frac{1}{P} = a + \frac{1}{\alpha}

\displaystyle \frac{1}{P} - \frac{1}{\alpha} = a

\displaystyle \frac{\alpha - P}{\alpha P} = a

From the second initial condition,

u'(\theta) = -a \sin \theta + b \cos \theta

u'(0) = -a \sin 0 + b \cos 0

0 = b.

From these two constants, we obtain

u(\theta) = \displaystyle \frac{\alpha - P}{\alpha P}  \cos \theta + 0 \sin \theta + \displaystyle \frac{1}{\alpha}

= \displaystyle \frac{1}{\alpha} \left(  1 + \frac{\alpha-P}{P} \cos \theta \right)

= \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha},

where \epsilon = \displaystyle \frac{\alpha - P}{P}.

Finally, since r = 1/u, we see that the planet’s orbit satisfies

r = \displaystyle \frac{\alpha}{1 + \epsilon \cos \theta},

so that, as shown earlier in this series, the orbit is an ellipse with eccentricity \epsilon.
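
(As a final sanity check, SymPy's dsolve can solve this initial-value problem directly; the short sketch below uses my own symbol names. The output is equivalent to u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} after writing \epsilon = \displaystyle \frac{\alpha - P}{P}.)

import sympy as sp

theta = sp.symbols('theta')
alpha, P = sp.symbols('alpha P', positive=True)
u = sp.Function('u')

ode = sp.Eq(u(theta).diff(theta, 2) + u(theta), 1/alpha)
ics = {u(0): 1/P, u(theta).diff(theta).subs(theta, 0): 0}
sol = sp.dsolve(ode, u(theta), ics=ics)
print(sp.simplify(sol.rhs))   # 1/alpha + (1/P - 1/alpha)*cos(theta), possibly written in a different order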

The reader will notice that this solution is pretty much a carbon-copy of the previous post. The difference is that calculus students wouldn’t necessarily be able to independently generate the solutions of the associated homogeneous differential equation and a particular solution of the nonhomogeneous differential equation, and so the line of questioning is designed to steer students toward the answer.